
Re: Empty IP when forwardfor enabled

2009-01-18 Thread Patrick Viet
On Mon, Jan 19, 2009 at 3:19 AM, Rodrigo elro...@gmail.com wrote:

 LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %s %b \"%{Referer}i\"
 \"%{User-Agent}i\"" combined

 but many times the IP field appears empty. I've read on this mailing list
 that it has something to do with KeepAliveTimeout on Apache. I raised it
 from 6 to 15, but no luck.

 How could I fix this?

Hi Rodrigo

Use mod_rpaf, a normal log config, and you'll be fine.
The variable doesn't appear in subsequent requests because haproxy
doesn't handle keepalive yet, so those requests are passed through unchanged.

Or, as Rainer suggested, don't use keepalive: option httpclose. But
that costs performance (for the end user): IE gets very
slow without keepalive.
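
For reference, a minimal Apache-side setup along those lines might look like
this (an untested sketch; the module path and proxy address are placeholders,
while the directive names are those of the stock mod_rpaf distribution):

```apache
# Rewrite the client address from X-Forwarded-For for requests
# arriving from the trusted proxy addresses below (placeholders).
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1 10.0.0.1
RPAFheader X-Forwarded-For
```

With this in place the standard LogFormat (%h) logs the real client address,
with no log-format changes needed.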

--
  Patrick



Re: Client IPs logging and/or transparent

2009-01-31 Thread Patrick Viet
I would rather say, patch haproxy so that it not only sends
x-forwarded-for but also x-forwarded-for-sourceport.
Patrick

On Sat, Jan 31, 2009 at 4:48 AM, John Lauro john.la...@covenanteyes.com wrote:
 Hello,



 Running mode tcp in case that makes a difference for any comments, as I know
 there are other options for http…



 I need to preserve for auditing the IP address of the clients and be able to
 associate it with a session.  One problem: the client IP and port
 appear to be logged, and so does the final server, but not
 the source port for the outgoing connection.  In theory, assuming ntp is in
 sync, I should be able to tie the logs together if I had the port number
 that was used in the outgoing connection.  Is there some way to turn this
 on, or am I just missing it from the logged line?



 The other option appears to be to setup haproxy act transparently.  This
 appears to be rather involved and sparse on details.  Based on examples I
 found on using squid with it, it appears to be more involved than just
 updating the kernel.  If anyone can post some hints on their setup with haproxy
 (sample config files and sample iptables (or are they not required))  that
 would be great.  If there is a yum repository with a patched kernel and
 other bits ready to install that would be even better.



 In some ways it looks rather messy to set up and support, but IP tracking is
 important.
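
For the transparent-mode option, the usual Linux recipe combines a haproxy
source line with a few iptables/ip commands. This is only a sketch: both the
kernel and the haproxy build need TPROXY support, and the fwmark value and
routing table number below are arbitrary placeholders.

```
# In the haproxy backend, connect out using the client's address:
#   source 0.0.0.0 usesrc clientip

# Deliver packets belonging to locally-terminated sockets back to
# the local stack instead of forwarding them:
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

The ip rule/route pair makes marked return traffic loop back locally so
haproxy can pick it up despite the spoofed source address.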









Re: Apache Error Log, X-Forwarded-For

2009-03-30 Thread Patrick Viet
Use mod_rpaf and Apache will set REMOTE_HOST = X-Forwarded-For in all
its processing.
stderr.net/rpaf or something like that for the URL.

Patrick

On Mon, Mar 30, 2009 at 11:43 PM, Will Buckner w...@chegg.com wrote:
 Hey guys,

 All of my Apaches are logging the load balancer's IP in the error log for
 source IP. I have changed the LogFormat to use the X-Forwarded-For header
 for the access log, but is there any way to get the correct IP in the error
 log? Unless I'm missing something, there is no LogFormat for the error log.




[PATCH] doc/configuration.txt: fix typos

2010-05-09 Thread Patrick Mézard
---
 doc/configuration.txt |7 +++
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index e1d5b71..a3d4ac4 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2859,7 +2859,7 @@ no option httpclose
  yes   |yes   |   yes  |   yes
   Arguments : none
 
-  As stated in section 1, HAProxy does not yes support the HTTP keep-alive
+  As stated in section 1, HAProxy does not yet support the HTTP keep-alive
   mode. So by default, if a client communicates with a server in this mode, it
   will only analyze, log, and process the first request of each connection. To
   workaround this limitation, it is possible to specify option httpclose. It
@@ -5217,7 +5217,7 @@ timeout http-keep-alive timeout
   without waiting for further clicks. Also, if set to a very small value (eg:
   1 millisecond) it will probably only accept pipelined requests but not the
   non-pipelined ones. It may be a nice trade-off for very large sites running
-  with tends to hundreds of thousands of clients.
+  with tens to hundreds of thousands of clients.
 
   If this parameter is not set, the http-request timeout applies, and if both
   are not set, timeout client still applies at the lower level. It should be
@@ -6348,8 +6348,7 @@ url_sub string
 
 Some predefined ACLs are hard-coded so that they do not have to be declared in
 every frontend which needs them. They all have their names in upper case in
-order to avoid confusion. Their equivalence is provided below. Please note that
-only the first three ones are not layer 7 based.
+order to avoid confusion. Their equivalence is provided below.
 
 ACL name  Equivalent toUsage
 ---+-+-
-- 
1.6.6




haproxy 1.4.7 and keep-alive

2010-06-11 Thread Patrick Mézard
Hello,

Sorry for the long email; I know there have been many threads about keep-alive
already, but I am a little confused by the 1.4.7 configuration.txt
(http://haproxy.1wt.eu/download/1.4/doc/configuration.txt). I'd be happy to
improve the documentation if you help me clarify a couple of points:

1- 1.1. The HTTP transaction model says: 
---
HAProxy currently only supports the HTTP keep-alive mode on the client side, 
and transforms it to a close mode on the server side.
---
I understand HAProxy sequences the incoming client requests and dispatches them
with a "Connection: close" header to the backends, applying dispatching rules
to every incoming request. Then "option http-server-close" seems to do exactly
this. Is it correct to say that "option http-server-close" is HAProxy's default
behaviour? Or does HAProxy still work in a "parse first request header and
consider everything else data" mode when this option is not set?

2- option httpclose documentation states:

As stated in section 1, HAProxy does not yet support the HTTP keep-alive mode. 
So by default, if a client communicates with a server in this mode, it will 
only analyze, log, and process the first request of each connection.

Who is right here, 1.1 or "option httpclose"? My understanding is "option
httpclose" adds a "Connection: close" header to requests from proxy to server
and to responses from proxy to client.

3- option forceclose says:

When this happens, it is possible to use option forceclose. It will actively 
close the outgoing server channel as soon as the server has finished to 
respond. This option implicitly enables the httpclose option.

Why is actively closing proxy to server connections related to closing the 
related proxy to client connection? Is it an implementation issue or by design?

4- Finally, except for resource consumptions and sometimes bad server 
behaviours (addressed by the pretend-keep-alive option), are there any other 
drawbacks to enabling client side keep-alive with option http-server-close 
only, other surprises? (I will probably set a 250ms http-keep-alive timeout
too, which seems to be a good trade-off).
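
The setup being described in point 4 would look roughly like the following
(an untested sketch; the 250ms value comes from the question above, the rest
is generic):

```
defaults
    mode http
    # keep client-side keep-alive; close the server side per request
    option http-server-close
    timeout http-keep-alive 250ms
    # only if servers misbehave when they see "Connection: close"
    option http-pretend-keepalive
```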

Thank you for any comment on these points.

--
Patrick Mézard



Re: haproxy 1.4.7 and keep-alive

2010-06-12 Thread Patrick Mézard
On 12/06/10 07:09, Willy Tarreau wrote:
 On Fri, Jun 11, 2010 at 11:40:18PM +0200, Patrick Mézard wrote:
 Hello,

[... snip, thanks for the answers ...]

 3- option forceclose says:
 
 When this happens, it is possible to use option forceclose. It will 
 actively close the outgoing server channel as soon as the server has 
 finished to respond. This option implicitly enables the httpclose option.
 
 Why is actively closing proxy to server connections related to closing the 
 related proxy to client connection? Is it an implementation issue or by 
 design?
 
 It is by design. Some servers ignored the Connection: close request
 header so even with httpclose you sometimes ended up with long
 connections. This was a disaster because the client waited for the
 server to close and the server did not. So the forceclose option
 was meant to send an active close to the server as soon as it began
 to respond, so that the end of the server's response caused an
 immediate close of this connection, forwarded to the client. But
 some servers did not accept that very well (fortunately, they did
 not need it). Since introduction of the keep-alive support, the
 option has been reworked so that the active close is sent when
 the server has finished responding. For compatibility with previous
 implementations, the close is still propagated to the client.
 
 So this is not even an issue, it's the required behaviour so that
 we don't break existing setups. The issues, if any, are the
 environments that require this option. Sometimes it can be replaced
 by http-server-close, sometimes it cannot due to buggy clients which
 wait for the close.

Ok, so this is for compatibility reasons. My point was it makes sense to 
actively close proxy/server connections if the servers do not honor 
Connection: close while maintaining persistent connections on the client 
side. Calling this option forcecloseserver, we would have option forceclose 
== option forcecloseserver + option httpclose. Just trying to build an 
accurate mental model of what's going on.

I will send patches to summarize these behaviours for newcomers like me, 
probably at the end of 1.1 section.

By the way, it is not easy to find haproxy git repositories. I think you should 
put a reference to them in the beginning of the Download section of the 
website. And cloning them is surprisingly slow (probably because it's done over 
HTTP).

--
Patrick Mézard



[PATCH 1 of 3] doc: summarize and highlight persistent connections behaviour

2010-06-12 Thread Patrick Mezard
# HG changeset patch
# User Patrick Mezard pmez...@gmail.com
# Date 1276351177 -7200
# Node ID e62ef7ba49a979f308fc9ff653e4dfb51652e1f0
# Parent  502d8a7ee3377e176347f3c5b6c2718d0ee855a3
doc: summarize and highlight persistent connections behaviour

diff --git a/doc/configuration.txt b/doc/configuration.txt
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -127,8 +127,7 @@
 Its advantages are a reduced latency between transactions, and less processing
 power required on the server side. It is generally better than the close mode,
 but not always because the clients often limit their concurrent connections to
-a smaller value. HAProxy currently only supports the HTTP keep-alive mode on
-the client side, and transforms it to a close mode on the server side.
+a smaller value.
 
 A last improvement in the communications is the pipelining mode. It still uses
 keep-alive, but the client does not wait for the first response to send the
@@ -142,8 +141,17 @@
 correctly support pipelining since there is no way to associate a response with
 the corresponding request in HTTP. For this reason, it is mandatory for the
 server to reply in the exact same order as the requests were received.
-HAProxy supports pipelined requests on the client side and processes them one
-at a time.
+
+By default HAProxy operates in a tunnel-like mode with regards to persistent
+connections: for each connection it processes the first request and forwards
+everything else (including additional requests) to selected server. Once
+established, the connection is persisted both on the client and server
+sides. Use option http-server-close to preserve client persistent connections
+while handling every incoming request individually, dispatching them one after
+another to servers, in HTTP close mode. Use option httpclose to switch both
+sides to HTTP close mode. option forceclose and option
+http-pretend-keepalive help working around servers misbehaving in HTTP close
+mode.
 
 
 1.2. HTTP request
@@ -2791,16 +2799,18 @@
  yes   |yes   |   yes  |   yes
   Arguments : none
 
-  This mode enables HTTP connection-close mode on the server side while keeping
-  the ability to support HTTP keep-alive and pipelining on the client side.
-  This provides the lowest latency on the client side (slow network) and the
-  fastest session reuse on the server side to save server resources, similarly
-  to option forceclose. It also permits non-keepalive capable servers to be
-  served in keep-alive mode to the clients if they conform to the requirements
-  of RFC2616. Please note that some servers do not always conform to those
-  requirements when they see Connection: close in the request. The effect
-  will be that keep-alive will never be used. A workaround consists in enabling
-  option http-pretend-keepalive.
+  By default, when a client communicates with a server, HAProxy will only
+  analyze, log, and process the first request of each connection. Setting
+  option http-server-close enables HTTP connection-close mode on the server
+  side while keeping the ability to support HTTP keep-alive and pipelining on
+  the client side.  This provides the lowest latency on the client side (slow
+  network) and the fastest session reuse on the server side to save server
+  resources, similarly to option forceclose. It also permits non-keepalive
+  capable servers to be served in keep-alive mode to the clients if they
+  conform to the requirements of RFC2616. Please note that some servers do not
+  always conform to those requirements when they see Connection: close in the
+  request. The effect will be that keep-alive will never be used. A workaround
+  consists in enabling option http-pretend-keepalive.
 
   At the moment, logs will not indicate whether requests came from the same
   session or not. The accept date reported in the logs corresponds to the end
@@ -2818,8 +2828,8 @@
   If this option has been enabled in a defaults section, it can be disabled
   in a specific instance by prepending the no keyword before it.
 
-  See also : option forceclose, option http-pretend-keepalive and
- option httpclose.
+  See also : option forceclose, option http-pretend-keepalive, option
+ httpclose and 1.1. The HTTP transaction model.
 
 
 option http-use-proxy-header
@@ -2911,15 +2921,13 @@
  yes   |yes   |   yes  |   yes
   Arguments : none
 
-  As stated in section 1, HAProxy does not yet support the HTTP keep-alive
-  mode. So by default, if a client communicates with a server in this mode, it
-  will only analyze, log, and process the first request of each connection. To
-  workaround this limitation, it is possible to specify option httpclose. It
-  will check if a Connection: close header is already set in each direction,
-  and will add one if missing. Each end should react to this by actively
-  closing the TCP connection after each transfer, thus resulting

[PATCH 2 of 3] doc: mention 'option http-server-close' effect in Tq section

2010-06-12 Thread Patrick Mezard
# HG changeset patch
# User Patrick Mezard pmez...@gmail.com
# Date 1276351667 -7200
# Node ID 665a0f2365f59ddf58e862ff8ccf830568c89fb9
# Parent  e62ef7ba49a979f308fc9ff653e4dfb51652e1f0
doc: mention 'option http-server-close' effect in Tq section

diff --git a/doc/configuration.txt b/doc/configuration.txt
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -7300,7 +7300,9 @@
 spent accepting these connections will inevitably slightly delay processing
 of other connections, and it can happen that request times in the order of
 a few tens of milliseconds are measured after a few thousands of new
-connections have been accepted at once.
+connections have been accepted at once. Setting option http-server-close
+may display larger request times since Tq also measures the time spent
+waiting for additional requests.
 
   - If Tc is close to 3000, a packet has probably been lost between the
 server and the proxy during the server connection phase. This value should



[PATCH 3 of 3] doc: add configuration samples

2010-06-12 Thread Patrick Mezard
# HG changeset patch
# User Patrick Mezard pmez...@gmail.com
# Date 1276354342 -7200
# Node ID 771c93c657b8fca58f6ae8eee7f8bbfa8e02c03c
# Parent  665a0f2365f59ddf58e862ff8ccf830568c89fb9
doc: add configuration samples

configuration.txt is thorough and accurate but lacked sample configurations
clarifying both the syntax and the relations between global, defaults,
frontend, backend and listen sections. Besides, almost all examples to be found
in haproxy-en.txt or online tutorials make use of the 'listen' syntax while
'frontend/backend' is really the one to know about.

diff --git a/doc/configuration.txt b/doc/configuration.txt
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -37,6 +37,7 @@
 2.Configuring HAProxy
 2.1.  Configuration file format
 2.2.  Time format
+2.3.  Examples
 
 3.Global parameters
 3.1.  Process management and security
@@ -370,6 +371,52 @@
   - d  : days.1d = 24h = 1440m = 86400s = 8640ms
 
 
+2.3. Examples
+-
+
+# Simple configuration for an HTTP proxy listening on port 80 on all
+# interfaces and forwarding requests to a single backend "servers" with a
+# single server "server1" listening on 127.0.0.1:8000
+global
+daemon
+maxconn 256
+
+defaults
+mode http
+timeout connect 5000ms
+timeout client 50000ms
+timeout server 50000ms
+
+frontend http-in
+bind *:80
+default_backend servers
+
+backend servers
+server server1 127.0.0.1:8000 maxconn 32
+
+
+# The same configuration defined with a single listen block. Shorter but
+# less expressive, especially in HTTP mode.
+global
+daemon
+maxconn 256
+
+defaults
+mode http
+timeout connect 5000ms
+timeout client 50000ms
+timeout server 50000ms
+
+listen http-in
+bind *:80
+server server1 127.0.0.1:8000 maxconn 32
+
+
+Assuming haproxy is in $PATH, test these configurations in a shell with:
+
+$ sudo haproxy -f configuration.conf -d
+
+
 3. Global parameters
 
 



Re: nice wiki doc of haproxy

2011-06-15 Thread Patrick Mézard
On 15/06/11 23:09, Willy Tarreau wrote:
 On Wed, Jun 15, 2011 at 04:37:16PM -0400, James Bardin wrote:
 Just throwing my $.02; how about converting the documentation to
 something more easily parse-able, like markdown?
 
 You mean the mainline doc ? If so, it's been discussed to great length in
 the past, and the short answer is no. The documentation is always what
 lags behind code. Whatever barrier you put in front of doc contribs will
 simply slow them down. I've experienced this a lot in the past with the
 french doc version which was never updated in contribs. Here I already
 see what will happen : I've updated the doc but I'm not sure about the
 end results as I don't have the tool to regenerate it. The current format
 leaves no excuse for this. I want a human to be able to parse it and to
 have to conform to very few rules.
 
 What I want is a *smart* converter which understands the doc format. Once
 we spot the real issues (ambiguous situations where we cannot guess), then
 we'll fix the format and add a few rules.
 
 Also, I regularly receive support from people who work on production
 systems where they have no choice but reading the doc in the 80x25 text
 console with vi or less. I absolutely want the doc to be optimally
 usable there. This means no tags, no long lines, etc... Just plain
 text formatted right and convertible to other formats for a nicer
 experience when it's possible.
 
 BTW, a man output would be nice too ;-)
 
 I remember about asciidoc and such which were able to produce various
 formats, and which could serve as an intermediary format later when
 we're able to parse what we have.

markdown and asciidoc are pretty similar, and human-readable.

That said I agree with everything you said. For instance, Mercurial 
documentation was migrated from a standalone text file to a simplified 
reStructuredText version (a popular format in the python world), mostly 
embedded in the source code, and this has been a huge win. We are now able to 
generate documentation in different formats (console output, manpages, HTML), 
display the same documentation from the CLI or web interface, and even 
generate documentation dynamically for features brought by third-party plugins.

--
Patrick Mézard



[PATCH] DOC: mention that default checks are TCP connections

2012-01-22 Thread Patrick Mézard
---
 doc/configuration.txt |   20 ++--
 1 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 3e777fa..7fd570c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6610,16 +6610,16 @@ backup
 
 check
   This option enables health checks on the server. By default, a server is
-  always considered available. If check is set, the server will receive
-  periodic health checks to ensure that it is really able to serve requests.
-  The default address and port to send the tests to are those of the server,
-  and the default source is the same as the one defined in the backend. It is
-  possible to change the address using the addr parameter, the port using the
-  port parameter, the source address using the source address, and the
-  interval and timers using the inter, rise and fall parameters. The
-  request method is define in the backend using the httpchk, smtpchk,
-  mysql-check, pgsql-check and ssl-hello-chk options. Please refer to
-  those options and parameters for more information.
+  always considered available. If check is set, the server is available when
+  accepting periodic TCP connections, to ensure that it is really able to serve
+  requests. The default address and port to send the tests to are those of the
+  server, and the default source is the same as the one defined in the
+  backend. It is possible to change the address using the addr parameter, the
+  port using the port parameter, the source address using the source
+  address, and the interval and timers using the inter, rise and fall
+  parameters. The request method is define in the backend using the httpchk,
+  smtpchk, mysql-check, pgsql-check and ssl-hello-chk options. Please
+  refer to those options and parameters for more information.
 
   Supported in default-server: No
 
-- 
1.7.3.4




add dynamic header to http response?

2013-05-07 Thread Patrick Hemmer
With haproxy 1.5, is there any way to add a dynamic header to the http
response (like the `http-request add-header` option for request headers)?
I'm adding an X-Request-Id header to requests before forwarding them on
to the back end, but would also like to send this same header
back in the response to the client. Something like the `http-request
add-header` or `unique-id-header` options would be great. The former
might be more flexible as it can be used for other things, but the
latter would also work.

Reading through the docs I don't see any way this could be done. If not,
can it be a feature request?
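
For reference against later releases: HAProxy eventually gained an
http-response add-header directive mirroring the request-side one. A hedged
sketch of the setup being asked for, assuming a version with that directive
and assuming the %ID log-format tag is accepted in the header's format string
(neither was available at the time of this thread):

```
frontend api
    bind *:80
    unique-id-format %{+X}o\ %pid-%rt
    # send the generated ID to the backend...
    unique-id-header X-Request-Id
    # ...and echo the same ID back to the client
    http-response add-header X-Request-Id %ID
    default_backend servers
```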

Thanks

-Patrick



syslog timestamp with millisecond

2013-05-10 Thread Patrick Hemmer
The current syslog implementation (via UDP) sends log entries with the
millisecond portion of the timestamp stripped off. Our log collector is
capable of handling timestamps with millisecond accuracy and I would
like to have it do so. Is there any way to accomplish this?

I know you can add an additional timestamp into the log message, but the
log collector uses the syslog timestamp as _the_ timestamp.

Haproxy 1.5 custom log format has access to the date/time, hostname, and
pid, so it would be nice to just have haproxy not add anything to the
log entries other than facility & priority, and then let the custom log
format add the date, host, program, and pid.

-Patrick



haproxy duplicate http_request_counter values

2013-08-11 Thread Patrick Hemmer
I'm using the %rt field in the unique-id-format config parameter (the
full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
one specific case, haproxy added the same http_request_counter value to
70 different http requests within a span of 61 seconds (from various
client hosts too). Does the http_request_counter only increment under
certain conditions, or is this a bug?

This is with haproxy 1.5-dev19

-Patrick


Re: haproxy duplicate http_request_counter values (BUG)

2013-08-13 Thread Patrick Hemmer

On 2013/08/11 15:45, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter
 (the full value is %{+X}o%pid-%rt), and am getting lots of
 duplicates. In one specific case, haproxy added the same
 http_request_counter value to 70 different http requests within a span
 of 61 seconds (from various client hosts too). Does the
 http_request_counter only increment under certain conditions, or is
 this a bug?

 This is with haproxy 1.5-dev19

 -Patrick


This appears to be part of a bug. I just experienced a scenario where
haproxy stopped responding. When I went into the log I found binary
garbage in place of the request ID. I have haproxy configured to route
certain URLs, and to respond with an `errorfile` when a request comes in
that doesn't match any of the configured paths. It seems whenever I
request an invalid URL and get the `errorfile` response, the request ID
gets screwed up and becomes jumbled binary data.

For example: haproxy[28645]: 207.178.167.185:49560 api bad_url/<NOSRV>
71/-1/-1/-1/71 3/3/0/0/3 0/0 127/242 403 PR-- Á + GET / HTTP/1.1
Notice the Á, that's supposed to be the process ID and request ID
separated by a hyphen. When I pipe it into xxd, I get this:

000: 6861 7072 6f78 795b 3238 3634 355d 3a20  haproxy[28645]:
010: 3230 372e 3137 382e 3136 372e 3138 353a  207.178.167.185:
020: 3439 3536 3020 6170 6920 6261 645f 7572  49560 api bad_ur
030: 6c2f 3c4e 4f53 5256 3e20 3731 2f2d 312f  l/<NOSRV> 71/-1/
040: 2d31 2f2d 312f 3731 2033 2f33 2f30 2f30  -1/-1/71 3/3/0/0
050: 2f33 2030 2f30 2031 3237 2f32 3432 2034  /3 0/0 127/242 4
060: 3033 2050 522d 2d20 90c1 8220 2b20 4745  03 PR-- ... + GE
070: 5420 2f20 4854 5450 2f31 2e31 0a T / HTTP/1.1.


I won't post my entire config as it's over 300 lines, but here's the
juicy stuff:


global
log 127.0.0.1   local0
maxconn 20480
user haproxy
group haproxy
daemon

defaults
log global
mode    http
option  httplog
option  dontlognull
retries 3
option  redispatch
timeout connect 5000
timeout client 60000
timeout server 170000
option  clitcpka
option  srvtcpka

stats   enable
stats   uri /haproxy/stats
stats   refresh 5
stats   auth my:secret

listen stats
bind 0.0.0.0:90
mode http
stats enable
stats uri /
stats refresh 5

frontend api
  bind *:80
  bind *:81 accept-proxy

  option httpclose
  option forwardfor
  http-request add-header X-Request-Timestamp %Ts.%ms
  unique-id-format %{+X}o%pid-%rt
  unique-id-header X-Request-Id
  rspadd X-Api-Host:\ i-a22932d9

  reqrep ^([^\ ]*)\ ([^\?\ ]*)(\?[^\ ]*)?\ HTTP.*  \0\r\nX-API-URL:\ \2


  acl is_1_1 path_dir /1/my/path
  use_backend 1_1 if is_1_1

  acl is_1_2 path_dir /1/my/other_path
  use_backend 1_2 if is_1_2

  ...

  default_backend bad_url

  log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

backend bad_url
  block if TRUE
  errorfile 403 /etc/haproxy/bad_url.http


content based routing with rewrite (reqrep)

2013-08-26 Thread Patrick Hemmer
So I'm trying to come up with the best way of doing this, but am having
a heck of a time. Basically I have several different backend service
pools, and I have one externally facing haproxy router. I want to take a
map of public URLs and route them to specific backend URLs.
For example

public.example.com/foo/bar -> foo.internal.example.com/one
public.example.com/foo/baz -> foo.internal.example.com/two
public.example.com/another -> more.internal.example.com/three

So the first 2 public URLs go to the same backend, but need different
rewrite rules.

I've tried doing the following config:
frontend public
  acl foo_bar path_dir /foo/bar
  reqrep ^([^\ ]*\ )/foo/bar(.*) \1/one\2 if foo_bar
  use_backend foo if foo_bar

Except it seems that the foo_bar acl isn't cached, and gets
re-evaluated after doing the reqrep, and so the use_backend fails.

The only way I can think of doing this is to put the acl and the
use_backend in the frontend, and then put the acl again with the reqrep
in the backend. Is there any cleaner way (if it works since I haven't
tried it yet)?
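
The frontend/backend split described in the last paragraph would look roughly
like this (an untested sketch; the backend name and internal server address
are placeholders based on the examples above):

```
frontend public
    bind *:80
    acl foo_bar path_dir /foo/bar
    acl foo_baz path_dir /foo/baz
    use_backend foo if foo_bar or foo_baz

backend foo
    # the acls have to be repeated here so the rewrite only
    # happens after backend selection is done
    acl foo_bar path_dir /foo/bar
    acl foo_baz path_dir /foo/baz
    reqrep ^([^\ ]*\ )/foo/bar(.*) \1/one\2 if foo_bar
    reqrep ^([^\ ]*\ )/foo/baz(.*) \1/two\2 if foo_baz
    server foo1 foo.internal.example.com:80
```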

-Patrick


Client timeout on http put shows as a server timeout with error 504

2013-09-17 Thread Patrick Hemmer
We have a case with haproxy 1.5-dev19 where, when a client is
uploading data via an HTTP PUT request, the client will fail to send all
its data and haproxy will time out the connection. The problem is that
haproxy reports this as an error 504 with connection flags of sH--,
meaning it timed out waiting for the server.

Now I've analyzed the http headers, and the PUT request has a
content-length header, so would it be possible to have haproxy report
these as a client side timeout instead of a server side timeout (when
the amount of data after headers is less than the amount indicated in
the content-length header)? And with a 4XX status code as well.
We have monitoring in place which looks for server errors, and I'd love
for it not to pick up client problems.

-Patrick


Re: Client timeout on http put shows as a server timeout with error 504

2013-09-18 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2013-09-18 01:46:50 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: Client timeout on http put shows as a server timeout with
error 504

 Hi Patrick,

 On Tue, Sep 17, 2013 at 06:29:13PM -0400, Patrick Hemmer wrote:
 We have this case with haproxy 1.5-dev19 where when a client is
 uploading data via a HTTP PUT request, the client will fail to send all
 it's data and haproxy will timeout the connection. The problem is that
 haproxy is reporting this an error 504 and connection flags of sH--,
 meaning it timed out waiting for the server.

 Now I've analyzed the http headers, and the PUT request has a
 content-length header, so would it be possible to have haproxy report
 these as a client side timeout instead of a server side timeout (when
 the amount of data after headers is less than the amount indicated in
 the content-length header)? And with a 4XX status code as well.
 We have monitoring in place which looks for server errors, and I'd love
 for it not to pick up client problems.
 I remember having checked for this in the past. I agree that ideally
 we should have a cH--. It's a bit trickier, because in practice it
 is permitted for the server to respond before the end. In fact we'd
 need another state before the Headers state, which is the client's
 body, so that we can report exactly what we were waiting for.

 I could check if it's easier to implement now. A first step could be
 to disable the server-side timeout as long as we're waiting for the
 client. That might do the trick. Probably that you could already check
 for this using a slightly larger server timeout than the client's (eg:
 21s for the server, 20s for the client). If that works, it would
 confirm that we could do this by just disabling the server timeout
 in this situation.
Seems like it's not going to be that simple. We currently have the
server timeout set at 170s, and the client timeout at 60s (and the
connection closes with 504 sH-- after 170s). Though this does seem like
it'd be the right approach; if the client hasn't finished sending all
its data, the client timeout should kick in.

-Patrick


Re: AW: GA Release of 1.5

2013-09-24 Thread Patrick Hemmer
"the development version of the software has some big feature"
isn't a valid argument that the development branch should be released.
There are lots of other big features in the development branch, we can't
have a release for every one of them.
  Can't have all the features in stable branches right away and then
  expect those branches to be supported for years to come, imho.

 Point taken and certainly an important consideration.

 Best regards,
 Jinn
Personally there are still several things that have been talked about
for 1.5 that I am still hoping to see. So I for one hope that 1.5 is not
released until it is finished.

-Patrick





Re: Client timeout on http put shows as a server timeout with error 504

2013-09-30 Thread Patrick Hemmer
*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2013-09-18 10:26:36 E
*To: *haproxy@formilux.org
*Subject: *Re: Client timeout on http put shows as a server timeout with
error
504

 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2013-09-18 01:46:50 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *haproxy@formilux.org haproxy@formilux.org
 *Subject: *Re: Client timeout on http put shows as a server timeout
 with error 504

 Hi Patrick,

 On Tue, Sep 17, 2013 at 06:29:13PM -0400, Patrick Hemmer wrote:
 We have this case with haproxy 1.5-dev19 where when a client is
 uploading data via a HTTP PUT request, the client will fail to send all
 its data and haproxy will time out the connection. The problem is that
 haproxy is reporting this as an error 504 with connection flags of sH--,
 meaning it timed out waiting for the server.

 Now I've analyzed the http headers, and the PUT request has a
 content-length header, so would it be possible to have haproxy report
 these as a client side timeout instead of a server side timeout (when
 the amount of data after headers is less than the amount indicated in
 the content-length header)? And with a 4XX status code as well.
 We have monitoring in place which looks for server errors, and I'd love
 for it not to pick up client problems.
 I remember having checked for this in the past. I agree that ideally
 we should have a cH--. It's a bit trickier, because in practice it
 is permitted for the server to respond before the end. In fact we'd
 need another state before the Headers state, which is the client's
 body, so that we can report exactly what we were waiting for.

 I could check if it's easier to implement now. A first step could be
 to disable the server-side timeout as long as we're waiting for the
 client. That might do the trick. Probably that you could already check
 for this using a slightly larger server timeout than the client's (eg:
 21s for the server, 20s for the client). If that works, it would
 confirm that we could do this by just disabling the server timeout
 in this situation.
 Seems like it's not going to be that simple. We currently have the
 server timeout set at 170s, and the client timeout at 60s (and the
 connection closes with 504 sH-- after 170s). Though this does seem
 like it'd be the right approach; if the client hasn't finished sending
 all its data, the client timeout should kick in.

 -Patrick

I'm also seeing a lot of connections being closed by the client and
showing up as 503 (connection flags are CC--), and in my opinion the
client closing the connection shouldn't be a 500 level error.
In this specific case, nginx uses code 499. Perhaps haproxy should adopt
this code as well.

For this and the timeout issue, if this isn't something that will be
fixed any time soon, I'm willing to try and dig into it myself. However
I don't know the haproxy source, so it will likely take me quite some time.

-Patrick


handling hundreds of reqrep statements

2013-10-22 Thread Patrick Hemmer
I'm currently using haproxy (1.5-dev19) as a content based router. It
takes an incoming request, looks at the url, rewrites it, and sends it
on to the appropriate back end.
The difficult part is that we need to stop all parsing and rewriting after
the first match. This is because we might have a url such as '/foo/bar'
which rewrites to '/foo/baz', and another rewrite from '/foo/b' to
'/foo/c'. As you can see both rules would try to trigger a rewrite on
'/foo/bar/shot', and we'd end up with '/foo/caz/shot'.
Additionally there are hundreds of these rewrites (the config file is
generated from a mapping).

There are 2 questions here:

1) I currently have this working using stick tables (it's unpleasant but
it works).
It basically looks like this:
frontend frontend1
acl foo_bar path_reg ^/foo/bar
use_backend backend1 if foo_bar

acl foo_b path_reg ^/foo/b
use_backend backend1 if foo_b

backend backend1
stick-table type integer size 1 store gpc0 # create a stick table to
store one entry
tcp-request content track-sc1 always_false # enable tracking on sc1.
The `always_false` doesn't matter, it just requires a key, so we give it one
acl rewrite-init sc1_clr_gpc0 ge 0 # ACL to clear gpc0
tcp-request content accept if rewrite-init # clear gpc0 on the start
of every request
acl rewrite-empty sc1_get_gpc0 eq 0 # ACL to check if gpc0 has been set
acl rewrite-set sc1_inc_gpc0 ge 0 # ACL to set gpc0 when a rewrite
has matched

acl foo_bar path_reg ^/foo/bar
reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2 if rewrite-empty
foo_bar rewrite-set # the conditional first checks if another rewrite
has matched, then checks the foo_bar acl, and then performs the
rewrite-set only if foo_bar matched

acl foo_b path_reg ^/foo/b
reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2 if rewrite-empty foo_b
rewrite-set # same procedure as above

(my actual rules are a bit more complicated, but those examples exhibit
all the problem points I have).

The cleaner way I thought of handling this was to instead do something
like this:
backend backend1
acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

acl foo_bar path_reg ^/foo/bar
reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2\r\nX-Rewrite-ID:\
foo_bar if !rewrite-found foo_bar

acl foo_b path_reg ^/foo/b
reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2\r\nX-Rewrite-ID:\ foo_b
if !rewrite-found foo_b

But this doesn't work. The rewrite-found acl never finds the header and
so both reqrep commands run. Is there any better way of doing this than
the nasty stick table?


2) I would also like to add a field to the log indicating which rule
matched. I can't figure out a way to accomplish this bit.
Since the config file is automatically generated, I was hoping to just
assign a short numeric ID and stick that in the log somehow. The only
way I can think that this could work is by adding a header conditionally
using an acl (or use the header created by the alternate idea above),
and then using `capture request header` to add that to the log. But it
does not appear haproxy can capture headers added by itself.

-Patrick


Re: handling hundreds of reqrep statements

2013-10-22 Thread Patrick Hemmer



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2013-10-22 19:13:08 E
*To: *haproxy@formilux.org
*Subject: *handling hundreds of reqrep statements

 I'm currently using haproxy (1.5-dev19) as a content based router. It
 takes an incoming request, looks at the url, rewrites it, and sends it
 on to the appropriate back end.
 The difficult part is that we need to stop all parsing and rewriting after
 the first match. This is because we might have a url such as
 '/foo/bar' which rewrites to '/foo/baz', and another rewrite from
 '/foo/b' to '/foo/c'. As you can see both rules would try to trigger a
 rewrite on '/foo/bar/shot', and we'd end up with '/foo/caz/shot'.
 Additionally there are hundreds of these rewrites (the config file is
 generated from a mapping).

 There are 2 questions here:

 1) I currently have this working using stick tables (it's unpleasant
 but it works).
 It basically looks like this:
 frontend frontend1
 acl foo_bar path_reg ^/foo/bar
 use_backend backend1 if foo_bar

 acl foo_b path_reg ^/foo/b
 use_backend backend1 if foo_b

 backend backend1
 stick-table type integer size 1 store gpc0 # create a stick table
 to store one entry
 tcp-request content track-sc1 always_false # enable tracking on
 sc1. The `always_false` doesn't matter, it just requires a key, so we
 give it one
 acl rewrite-init sc1_clr_gpc0 ge 0 # ACL to clear gpc0
 tcp-request content accept if rewrite-init # clear gpc0 on the
 start of every request
 acl rewrite-empty sc1_get_gpc0 eq 0 # ACL to check if gpc0 has
 been set
 acl rewrite-set sc1_inc_gpc0 ge 0 # ACL to set gpc0 when a rewrite
 has matched

 acl foo_bar path_reg ^/foo/bar
 reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2 if rewrite-empty
 foo_bar rewrite-set # the conditional first checks if another rewrite
 has matched, then checks the foo_bar acl, and then performs the
 rewrite-set only if foo_bar matched

 acl foo_b path_reg ^/foo/b
 reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2 if rewrite-empty foo_b
 rewrite-set # same procedure as above

 (my actual rules are a bit more complicated, but those examples
 exhibit all the problem points I have).

 The cleaner way I thought of handling this was to instead do something
 like this:
 backend backend1
 acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

 acl foo_bar path_reg ^/foo/bar
 reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2\r\nX-Rewrite-ID:\
 foo_bar if !rewrite-found foo_bar

 acl foo_b path_reg ^/foo/b
 reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2\r\nX-Rewrite-ID:\
 foo_b if !rewrite-found foo_b

 But this doesn't work. The rewrite-found acl never finds the header
 and so both reqrep commands run. Is there any better way of doing this
 than the nasty stick table?


 2) I would also like to add a field to the log indicating which rule
 matched. I can't figure out a way to accomplish this bit.
 Since the config file is automatically generated, I was hoping to just
 assign a short numeric ID and stick that in the log somehow. The only
 way I can think that this could work is by adding a header
 conditionally using an acl (or use the header created by the alternate
 idea above), and then using `capture request header` to add that to
 the log. But it does not appear haproxy can capture headers added by
 itself.

 -Patrick

Ok, so I went home and resumed trying to figure this out, starting from
scratch on a whole new machine. Well guess what, the "cleaner way"
worked. After many proclamations of "WTF?" out loud (my dog was getting
concerned), I think I found a bug. And I cannot begin to describe just
how awesome this bug is.

Here's how you can duplicate this awesomeness:

Start a haproxy with the following config:
defaults
mode http
timeout connect 1000
timeout client 1000
timeout server 1000

frontend frontend
bind *:2082

maxconn 2

  acl rewrite-found req.hdr(X-Header-ID) -m found

reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ bar if
!rewrite-found
reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ pop if
!rewrite-found
reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ tart if
!rewrite-found

default_backend backend

backend backend
server server 127.0.0.1:2090



Start up a netcat:
while true; do nc -l -p 2090; done


Create a file with the following contents (I'll presume we call it data):
GET /foo/ HTTP/1.1
Accept: */*
User-Agent: Agent
Host: localhost:2082


(with the empty line on the bottom)

And now run:
nc localhost 2082 < data

In your listening netcat, notice you got 3 X-Header-ID headers.

Now in your data file, move the "Accept: */*" line down one line, so it's
after the "User-Agent" line, and retry. Notice you only get 1 "X-Header-ID"
back. It works!

But wait, it gets even better. Put the "Accept: */*" line back where it
was, and in the haproxy config, replace all "X-Header-ID" with
"X-HeaderID" (just remove the dash).

Re: handling hundreds of reqrep statements

2013-10-23 Thread Patrick Hemmer
 



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2013-10-22 23:32:31 E
*CC: *haproxy@formilux.org
*Subject: *Re: handling hundreds of reqrep statements



 
 *From: *Patrick Hemmer hapr...@stormcloud9.net
 *Sent: * 2013-10-22 19:13:08 E
 *To: *haproxy@formilux.org
 *Subject: *handling hundreds of reqrep statements

 I'm currently using haproxy (1.5-dev19) as a content based router. It
 takes an incoming request, looks at the url, rewrites it, and sends
 it on to the appropriate back end.
 The difficult part is that we need to stop all parsing and rewriting
 the first match. This is because we might have a url such as
 '/foo/bar' which rewrites to '/foo/baz', and another rewrite from
 '/foo/b' to '/foo/c'. As you can see both rules would try to trigger
 a rewrite on '/foo/bar/shot', and we'd end up with '/foo/caz/shot'.
 Additionally there are hundreds of these rewrites (the config file is
 generated from a mapping).

 There are 2 questions here:

 1) I currently have this working using stick tables (it's unpleasant
 but it works).
 It basically looks like this:
 frontend frontend1
 acl foo_bar path_reg ^/foo/bar
 use_backend backend1 if foo_bar

 acl foo_b path_reg ^/foo/b
 use_backend backend1 if foo_b

 backend backend1
 stick-table type integer size 1 store gpc0 # create a stick table
 to store one entry
 tcp-request content track-sc1 always_false # enable tracking on
 sc1. The `always_false` doesn't matter, it just requires a key, so we
 give it one
 acl rewrite-init sc1_clr_gpc0 ge 0 # ACL to clear gpc0
 tcp-request content accept if rewrite-init # clear gpc0 on the
 start of every request
 acl rewrite-empty sc1_get_gpc0 eq 0 # ACL to check if gpc0 has
 been set
 acl rewrite-set sc1_inc_gpc0 ge 0 # ACL to set gpc0 when a
 rewrite has matched

 acl foo_bar path_reg ^/foo/bar
 reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2 if rewrite-empty
 foo_bar rewrite-set # the conditional first checks if another rewrite
 has matched, then checks the foo_bar acl, and then performs the
 rewrite-set only if foo_bar matched

 acl foo_b path_reg ^/foo/b
 reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2 if rewrite-empty
 foo_b rewrite-set # same procedure as above

 (my actual rules are a bit more complicated, but those examples
 exhibit all the problem points I have).

 The cleaner way I thought of handling this was to instead do
 something like this:
 backend backend1
 acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

 acl foo_bar path_reg ^/foo/bar
 reqrep ^(GET|POST)\ /foo/bar(.*) \1\ /foo/baz\2\r\nX-Rewrite-ID:\
 foo_bar if !rewrite-found foo_bar

 acl foo_b path_reg ^/foo/b
 reqrep ^(GET|POST)\ /foo/b(.*) \1\ /foo/c\2\r\nX-Rewrite-ID:\
 foo_b if !rewrite-found foo_b

 But this doesn't work. The rewrite-found acl never finds the header
 and so both reqrep commands run. Is there any better way of doing
 this than the nasty stick table?


 2) I would also like to add a field to the log indicating which rule
 matched. I can't figure out a way to accomplish this bit.
 Since the config file is automatically generated, I was hoping to
 just assign a short numeric ID and stick that in the log somehow. The
 only way I can think that this could work is by adding a header
 conditionally using an acl (or use the header created by the
 alternate idea above), and then using `capture request header` to add
 that to the log. But it does not appear haproxy can capture headers
 added by itself.

 -Patrick

 Ok, so I went home and resumed trying to figure this out, starting
 from scratch on a whole new machine. Well guess what, the "cleaner
 way" worked. After many proclamations of "WTF?" out loud (my dog was
 getting concerned), I think I found a bug. And I cannot begin to
 describe just how awesome this bug is.

 Here's how you can duplicate this awesomeness:

 Start a haproxy with the following config:
 defaults
 mode http
 timeout connect 1000
 timeout client 1000
 timeout server 1000

 frontend frontend
 bind *:2082

 maxconn 2

   acl rewrite-found req.hdr(X-Header-ID) -m found

 reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ bar if
 !rewrite-found
 reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ pop if
 !rewrite-found
 reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ tart if
 !rewrite-found

 default_backend backend

 backend backend
 server server 127.0.0.1:2090



 Start up a netcat:
 while true; do nc -l -p 2090; done


 Create a file with the following contents (I'll presume we call it
 data):
 GET /foo/ HTTP/1.1
 Accept: */*
 User-Agent: Agent
 Host: localhost:2082


 (with the empty line on the bottom)

 And now run:
 nc localhost 2082 < data

 In your listening netcat, notice you got 3 X-Header-ID headers.

 Now in your data file

Re: handling hundreds of reqrep statements

2013-10-23 Thread Patrick Hemmer
 



*From: *hushmeh...@hushmail.com
*Sent: * 2013-10-23 01:06:24 E
*To: *hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: handling hundreds of reqrep statements


 On Wed, 23 Oct 2013 05:33:38 +0200 Patrick Hemmer 
 hapr...@stormcloud9.net wrote:
reqrep ^(GET)\ /foo/(.*) \1\ /foo/\2\r\nX-Header-ID:\ bar if
 !rewrite-found
 What about reqadd? Clumsy fiddling with \r\n (or \n\r) in regexp 
 seems awkward to me.
 reqadd X-Header-ID:\ bar unless rewrite-found

Ya, I think I figured out the issue. It had to do with haproxy
pre-allocating buffers for each header, and not expecting them to be
moved around.
Unfortunately I can't use reqadd to add a header as reqadd happens too
late in the process. All reqrep statements happen before reqadd. So if I
put an acl on reqrep to skip it if the header has been added, it'll
always run the reqrep because the header gets added afterwards.
However I think I can use http-request set-header instead of reqadd.
It's not as simple as the reqrep \r\n idea, but still better than the
nasty stick table.
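
The `http-request set-header` idea mentioned above could look roughly like this (a sketch; the ACL and header names are carried over from the earlier example, and whether the rule ordering works out in 1.5-dev19 is exactly what remains to be tested):

```
backend backend1
    acl rewrite-found req.hdr(X-Rewrite-ID) -m found

    acl foo_bar path_reg ^/foo/bar
    http-request set-header X-Rewrite-ID foo_bar if !rewrite-found foo_bar

    acl foo_b path_reg ^/foo/b
    http-request set-header X-Rewrite-ID foo_b if !rewrite-found foo_b
```

The header then records which rule matched first, replacing the gpc0 stick-table counter as the "a rewrite already happened" flag.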


disable backend through socket

2013-12-20 Thread Patrick Hemmer
Simple question: Is there any way to disable a backend through the socket?
I see you can disable both frontends, and servers through the socket,
but I don't see a way to do a backend.

-Patrick


Re: disable backend through socket

2013-12-22 Thread Patrick Hemmer
No. As I said, I want to disable the backend.
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-disabled


-Patrick



*From: *Jonathan Matthews cont...@jpluscplusm.com
*Sent: * 2013-12-22 16:23:18 E
*To: *haproxy@formilux.org
*Subject: *Re: disable backend through socket

 On 22 Dec 2013 20:32, Patrick Hemmer hapr...@stormcloud9.net wrote:
 
  That disables a server. I want to disable a backend.

 No, you want to disable all the servers in a backend. I'm not sure
 there's a shortcut that's better than just doing them one by one.
 Others may be able to advise about alternatives, but is that an option
 for you?

 Jonathan
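
Jonathan's one-by-one approach can be scripted against the stats socket. This sketch only generates the commands (backend and server names are hypothetical); piping the output to the socket with socat would apply them:

```shell
# Emit one "disable server" command per server in the backend; to apply,
# pipe the output to the stats socket, e.g.:
#   ... | socat stdio /var/run/haproxy.sock
for srv in server1 server2; do
    printf 'disable server backend1/%s\n' "$srv"
done
```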




Re: disable backend through socket

2013-12-23 Thread Patrick Hemmer
 On Sun, Dec 22, 2013 at 05:05:16PM -0500, Patrick Hemmer wrote:
 No. As I said, I want to disable the backend.
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-disabled
 That doesn't really work for backends since they don't decide to get
 traffic. At least if a config accepts to start with the disabled
 keyword in a backend and this backend is referenced in a frontend, I
 have no idea what it does behind the scenes. I'm not even sure the
 backend is completely initialized.

Ah, ok. I can live with that :-)

 What do you want to do exactly ? Do you just want to disable the
 health checks ? It's unclear what result you're seeking in fact.

I was just looking to disable backends without restarting the service.
Nothing more. Nothing less.
Currently when I want to disable a backend I just update the config and
reload haproxy. Not a big deal. Was just hoping that since frontends and
servers could both be enabled/disabled through the socket, that backends
could too.

The reason why I don't want to disable individual servers is that we
have an automated process which enables & disables servers. If a backend
is disabled, then I don't want a server to automatically get enabled and
start taking traffic. By disabling the backend, we prevent this scenario.

 Willy

Thank you

-Patrick


Re: disable backend through socket

2013-12-26 Thread Patrick Hemmer
*From: *Gabriel Sosa sosagabr...@gmail.com
*Sent: * 2013-12-26 09:41:21 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: disable backend through socket




 On Mon, Dec 23, 2013 at 12:21 PM, Patrick Hemmer
 hapr...@stormcloud9.net wrote:

 On Sun, Dec 22, 2013 at 05:05:16PM -0500, Patrick Hemmer wrote:
 No. As I said, I want to disable the backend.
 
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-disabled
 That doesn't really work for backends since they don't decide to get
 traffic. At least if a config accepts to start with the disabled
 keyword in a backend and this backend is referenced in a frontend, I
 have no idea what it does behind the scenes. I'm not even sure the
 backend is completely initialized.

 Ah, ok. I can live with that :-)


 What do you want to do exactly ? Do you just want to disable the
 health checks ? It's unclear what result you're seeking in fact.

 I was just looking to disable backends without restarting the
 service. Nothing more. Nothing less.
 Currently when I want to disable a backend I just update the config
 and reload haproxy. Not a big deal. Was just hoping that since
 frontends and servers could both be enabled/disabled through the
 socket, that backends could too.

 The reason why I don't want to disable individual servers is that
 we have an automated process which enables & disables servers. If
 a backend is disabled, then I don't want a server to automatically
 get enabled and start taking traffic. By disabling the backend, we
 prevent this scenario.

 Willy

 Thank you

 -Patrick



 Patrick,

 did you take a look to the load balancer feedback feature? [1] I think
 this might help you.

 Saludos

 [1]
 http://blog.loadbalancer.org/open-source-windows-service-for-reporting-server-load-back-to-haproxy-load-balancer-feedback-agent/


I have seen this yes, but unfortunately it still operates on a
per-server basis. I would have to reach out to every server and tell the
feedback agent to advertise itself as in maintenance. The goal is to
be able to put the entire backend in maintenance, regardless of what the
status of the individual servers are.

This isn't that big of a deal. I currently have a haproxy controller
daemon which adjusts the haproxy.cfg (sets backend disable) and reloads.
I just like to avoid reloading as much as possible.

-Patrick


Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-01-21 Thread Patrick Hemmer

*From: *Malcolm Turnbull malc...@loadbalancer.org
*Sent: * 2014-01-14 07:13:27 E
*To: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Just a simple thought on health checks after a soft reload of
HAProxy

 Just a simple though on health checks after a soft reload of HAProxy

 If for example you had several backend servers one of which had crashed...
 Then you make a configuration change to HAProxy and soft reload,
 for instance adding a new backend server.

 All the servers are instantly brought up and available for traffic
 (including the crashed one).
 So traffic will possibly be sent to a broken server...

 Obviously its only a small problem as it is fixed as soon as the
 health check actually runs...

 But I was just wondering: is there a way of saying "don't bring up a
 server until it passes a health check"?
I was just thinking of this issue myself and google turned up your post.
Personally I would not like that every server is considered down until
after the health checks pass. Basically this would result in things
being down after a reload, which defeats the point of the reload being
non-interruptive.

I can think of 2 possible solutions:
1) When the new process comes up, do an initial check on all servers
(just one) which have checks enabled. Use that one check as the verdict
for whether each server should be marked 'up' or 'down'. After each
server has been checked once, then signal the other process to shut down
and start listening.
2) Use the stats socket (if enabled) to pull the stats from the previous
process. Use its health check data to pre-populate the health data of
the new process. This one has a few drawbacks though. The server &
backend names must match between the old and new config, and the stats
socket has to be enabled. It would probably be harder to code as well,
but I really don't know on that.
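
Solution 2 would start from the `show stat` CSV on the old process's socket. As a sketch, the health status lives in field 18 of the 1.5 CSV format; the sample line below is fabricated for illustration, and in practice it would come from piping `show stat` through socat:

```shell
# pxname and svname are fields 1-2, and the health status is field 18,
# of the "show stat" CSV output; a real line would be obtained via:
#   echo "show stat" | socat stdio /var/run/haproxy.sock
line='backend1,server1,0,0,0,0,,10,1024,2048,,0,,0,0,0,0,UP,...'
echo "$line" | cut -d, -f1,2,18
```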

-Patrick


determine size of http headers

2014-01-23 Thread Patrick Hemmer
What I'd like to do is add a few items to the log line which contain the
size of the headers, and then the value of the Content-Length header.
This way if the connection is broken for any reason, we can determine if
the client sent all the data they were supposed to.

Logging the Content-Length header is easy, but I can't find a way to get
the size of the headers.
The only way that pops into mind is to look for the first occurrence of
\r\n\r\n and get its offset (and preferably add 4 as to include the size
of the \r\n\r\n in the calculation). But I don't see a way to accomplish
this.

Any ideas?

Thanks

-Patrick
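
Outside of haproxy, the offset arithmetic itself is simple; a sketch with standard tools (the request below is fabricated, and bash's `$'...'` quoting is used to embed a literal CR in the sed pattern):

```shell
# Count the bytes of the header block, including the terminating blank
# line: print lines 1 through the first bare-CR line, then count bytes.
printf 'GET / HTTP/1.1\r\nHost: example\r\n\r\nbody' > /tmp/req.bin
sed -n $'1,/^\r$/p' /tmp/req.bin | wc -c    # header block size in bytes
```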


Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
This patch does appear to have solved the issue reported, but it
introduced another.
If I use `http-request add-header` with %rt in the value to add the
request ID, and then I also use it in `unique-id-format`, the 2 settings
get different values. The value used for `http-request add-header` will
be one less than the value used for `unique-id-format` (this applies to
both using %ID in the log format and using `unique-id-header`).

Without this patch, all values are the same.

-Patrick


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2013-08-13 11:53:16 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?
 Wow, congrats, you found a nice ugly bug! Here's how the counter is
 retrieved at the moment of logging :

   iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X",
 global.req_count);

 As you can see, it uses a global variable which holds the global number of
 requests seen at the moment of logging (or assigning the header) instead of
 a unique value assigned to each request!

 So all the requests that are logged in the same time frame between two
 new requests get the same ID :-(

 The counter should be auto-incrementing so that each retrieval is unique.

 Please try with the attached patch.

 Thanks,
 Willy




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
Actually I sent that prematurely. The behavior is actually even simpler.
With `http-request add-header`, %rt is one less than when used in a
`log-format` or `unique-id-header`. I'm guessing incrementing the value
happens after `http-request` is processed, but before log-format or
unique-id-header.
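
A config exhibiting the two retrieval points might look like this (a sketch; the header names and ports are illustrative):

```
frontend fe
    bind *:8080
    # %rt evaluated while processing http-request rules...
    http-request add-header X-Request-ID %{+X}o%pid-%rt
    # ...comes out one less than the %rt evaluated here:
    unique-id-format %{+X}o%pid-%rt
    unique-id-header X-Unique-ID
    default_backend be
```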

-Patrick



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-01-25 03:40:38 E
*To: *Willy Tarreau w...@1wt.eu
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 This patch does appear to have solved the issue reported, but it
 introduced another.
 If I use `http-request add-header` with %rt in the value to add the
 request ID, and then I also use it in `unique-id-format`, the 2
 settings get different values. The value used for `http-request
 add-header` will be one less than the value used for
 `unique-id-format` (this applies to both using %ID in the log format
 and using `unique-id-header`).

 Without this patch, all values are the same.

 -Patrick

 
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2013-08-13 11:53:16 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *haproxy@formilux.org haproxy@formilux.org
 *Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?
 Wow, congrats, you found a nice ugly bug! Here's how the counter is
 retrieved at the moment of logging :

    iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X",
  global.req_count);

 As you can see, it uses a global variable which holds the global number of
 requests seen at the moment of logging (or assigning the header) instead of
 a unique value assigned to each request!

 So all the requests that are logged in the same time frame between two
 new requests get the same ID :-(

 The counter should be auto-incrementing so that each retrieval is unique.

 Please try with the attached patch.

 Thanks,
 Willy





Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-01-25 04:43:28 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sat, Jan 25, 2014 at 03:40:38AM -0500, Patrick Hemmer wrote:
 This patch does appear to have solved the issue reported, but it
 introduced another.
 If I use `http-request add-header` with %rt in the value to add the
 request ID, and then I also use it in `unique-id-format`, the 2 settings
 get different values. The value used for `http-request add-header` will
 be one less than the value used for `unique-id-format` (this applies to
 both using %ID in the log format and using `unique-id-header`).
 You're damn right! I forgot this case where the ID could be used twice :-(

 So we have no other choice but copying the ID into the session or HTTP
 transaction, since it's possible to use it several times. At the same
 time, I'm wondering if we should not also increment it for new sessions,
 because for people who forward non-HTTP traffic, there's no unique counter.

 What I'm thinking about is the following then :

   - increment the global counter on each new session and store it into
 the session.
   - increment it again when dealing with a new request over an existing
 session.

 That way it would count each transaction, either TCP connection or HTTP
 request. And since the ID would be assigned to the session, it would
 remain stable for all the period where it's needed.

 What do you think ?

Sounds reasonable. Running through it in my head, I can't conjure up any
scenario where that approach wouldn't work.


-Patrick




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
Confirmed. Testing various scenarios, and they all work.

Thanks for the quick patch :-)

-Patrick


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-01-25 05:09:09 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 On Sat, Jan 25, 2014 at 05:05:07AM -0500, Patrick Hemmer wrote:
 Sounds reasonable. Running through it in my head, I can't conjure up any
 scenario where that approach wouldn't work.
 Same here. And it works fine for me with the benefit of coherency
 between all reported unique IDs.

 I'm about to merge the attached patch, if you want to confirm that
 it's OK for you as well, feel free to do so :-)

 Willy




Re: Real client IP address question

2014-01-27 Thread Patrick Hemmer
You can use the proxy protocol for this. Haproxy doesn't allow
manipulation of the TCP stream itself as it could be any number of
protocols which haproxy doesn't support. However the proxy protocol
sends a line at the very beginning of the stream containing the client
source IP, source port, destination IP, and destination port, then it starts sending
the data. As such, whatever you're sending to has to be capable of
handling the proxy protocol header (and be configured to do so).

See
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-send-proxy
and http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
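A minimal TCP-mode sketch (listener name, port and server address are assumed); the receiving server must be configured to parse the PROXY protocol header or it will reject the connection:

```
# sketch: pass the client source IP/port over plain TCP via the PROXY protocol
listen tcp-in
    mode tcp
    bind :5000
    # haproxy prepends "PROXY TCP4 <src> <dst> <sport> <dport>" to each connection
    server app1 192.168.0.10:5000 send-proxy
```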

-Patrick



*From: *Semenov, Evgeny ev.seme...@brokerkf.ru
*Sent: * 2014-01-27 09:06:59 E
*To: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Real client IP address question

 Hi,

  

 There is a setting (the 'forwardfor' option) in haproxy allowing it to
 forward traffic with the real client IP address to the end server. This
 setting works only for HTTP traffic. Is there a way to make a similar
 setting for TCP?

 I run haproxy on Linux OS.

  

  

  

 Best regards,

 Evgeny Semenov

  




capture.req.hdr

2014-02-06 Thread Patrick Hemmer
I really like this feature, and it was something actually on my todo
list of things to look into adding to haproxy.
However there is one thing I would consider supporting. Instead of
requiring the index of the capture keyword in the config, which is very
cumbersome and awkward in my opinion, support using the header name.

Now I imagine the immediate response to this is going to be that this
would require searching for the header by name every time
capture.req.hdr is used, and the captured headers are stored in a simple
array not maintaining the header names. This would complicate the code
and possibly slow haproxy down.
But, an alternate idea would be to transform the header name into its
index at the time of parsing configuration. This would let the user use
a header name, but the actual haproxy code which translates
capture.req.hdr wouldn't change at all.
It would be a lot less fragile when someone updates their config to
capture an additional header, but forgets to update all indexes (plus
having to keep track of indexes in the first place).
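To illustrate the index fragility being described (header names and the log-format line are assumed): captures are referenced by declaration order, so inserting a new `capture request header` line shifts every later index:

```
frontend fe
    bind :80
    capture request header Host len 64          # index 0
    capture request header X-Client-Id len 32   # index 1
    # the indexes below must be kept in sync with the declaration order above
    log-format %ci\ %r\ host=%[capture.req.hdr(0)]\ client=%[capture.req.hdr(1)]
```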


-Patrick


503 errors from HTTP statistics proxy

2014-02-20 Thread Patrick Landry

I am running HAProxy version 1.5-dev16 to load balance traffic to a pair of web 
servers. That part of the service is running great. I attempted to add a proxy 
to serve the HTTP statistics page and am receiving 503 Service Unavailable 
messages. (I also run a separate instance of HAProxy version 1.4.8 on which the 
statistics page is working fine.) I have tried several different 
configurations. Here is what gets logged when I attempt to access the 
statistics page. 



Feb 20 11:23:14 localhost haproxy[9350]: 1.2.3.4:54080 
[20/Feb/2014:11:23:14.616] adminstats adminstats/NOSRV 1/-1/-1/-1/+1 503 +212 
SC-- 15/0/0/0/0 0/0 GET /haproxy_stats HTTP/1.1 


Feb 20 11:23:14 localhost haproxy[9350]: 1.2.3.4:54081 
[20/Feb/2014:11:23:14.867] adminstats adminstats/NOSRV 0/-1/-1/-1/+0 503 +212 
SC-- 22/0/0/0/0 0/0 GET /favicon.ico HTTP/1.1 


After adding the listen section for the HTTP statistics proxy I reloaded 
HAProxy using the -sf argument rather than doing a cold start. (I wanted to 
mention that in case it makes a difference somehow.) 


Below is my current configuration. I suspect it is something in my 
configuration which is causing the problem but I am stumped. Any help would be 
appreciated. 


global 
log 127.0.0.1 local0 
log 127.0.0.1 local1 notice 
maxconn 8192 
user haproxy 
group haproxy 
daemon 
spread-checks 5 
stats socket /var/run/haproxy/haproxy.sock mode 0600 level admin 


defaults 
mode http 
log global 
option httplog 
option dontlognull 
option logasap 
retries 3 
option redispatch 
maxconn 8192 
timeout connect 3 
timeout client 3 
timeout server 3 
option contstats 


frontend https-in 
option httplog 
option httpclose 
option forwardfor 
reqadd X-Forwarded-Proto:\ https 


bind 1.2.3.4:443,5.6.7.8:443 ssl crt /.../cert.pem ca-file /.../bundle.pem 


default_backend servers 


frontend http-in 
option httplog 
option httpclose 
option forwardfor 
reqadd X-Forwarded-Proto:\ http 


bind 1.2.3.4:80,5.6.7.8:80 


default_backend servers 


backend servers 
option httpchk 
option httplog 
option httpclose 
balance roundrobin # Load Balancing algorithm 
cookie COOKIE insert indirect nocache 


server one 9.10.11.12:80 cookie ONE weight 10 maxconn 1024 check 
server two 9.10.11.13:80 cookie TWO weight 10 maxconn 1024 check 
server three 9.10.11.14:80 cookie THREE weight 0 maxconn 1024 check disabled 


listen adminstats 0.0.0.0:8080 
mode http 
balance 
timeout client 5000 
timeout connect 4000 
timeout server 3 
stats uri /haproxy_stats 
stats realm HAProxy\ Statistics 
stats auth user:password 






-- 

patrick 

Patrick Landry 
University of Louisiana at Lafayette 
Director, University Computer Support Services 



Re: 503 errors from HTTP statistics proxy

2014-02-20 Thread Patrick Landry


- Original Message -


Hi Patrick, 

I think your listen adminstats would be glad to have a 'stats enable' 
statement! 

Baptiste 



Thanks but that does not fix it. I had that included at one point. I have been 
through so many configurations 

listen adminstats 0.0.0.0:8080 
mode http 
balance 
timeout client 5000 
timeout connect 4000 
timeout server 3 
stats enable 
stats uri /haproxy_stats 
stats realm HAProxy\ Statistics 
stats auth user:password 

Log message: 

Feb 20 17:02:31 localhost haproxy[14974]: 1.2.3.4:43649 
[20/Feb/2014:17:02:31.367] adminstats adminstats/NOSRV 3/-1/-1/-1/+3 503 +212 
SC-- 12/1/0/0/0 0/0 GET /haproxy_stats HTTP/1.1 

-- 

patrick 

Patrick Landry 
University of Louisiana at Lafayette 
Director, University Computer Support Services 

Re: 503 errors from HTTP statistics proxy

2014-02-20 Thread Patrick Landry
- Original Message -
 From: Cyril Bonté cyril.bo...@free.fr
 To: Patrick Landry p...@louisiana.edu, Baptiste
 bed...@gmail.com
 Cc: HAProxy haproxy@formilux.org
 Sent: Thursday, February 20, 2014 5:32:25 PM
 Subject: Re: 503 errors from HTTP statistics proxy
 Hi Patrick,
 Le 21/02/2014 00:06, Patrick Landry a écrit :
  Thanks but that does not fix it. I had that included at one point. I
  have been through so many configurations
 The solution is to upgrade to haproxy-1.5-dev22 or the current
 snapshot ;-)
 There were regressions on the stats page in dev16 that were fixed in
 dev17.
 --
 Cyril Bonté
Thank you!

--
patrick

Patrick Landry
University of Louisiana at Lafayette
Director, University Computer Support Services
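For later readers hitting the same 503s: once on a fixed build (dev17 or later), a minimal stats section like the following sketch is sufficient (listener name, port and credentials are assumed):

```
listen adminstats
    bind :8080
    mode http
    stats enable
    stats uri /haproxy_stats
    stats realm HAProxy\ Statistics
    stats auth user:password
```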

Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-02-22 Thread Patrick Hemmer
 



*From: *Sok Ann Yap sok...@gmail.com
*Sent: * 2014-02-21 05:11:48 E
*To: *haproxy@formilux.org
*Subject: *Re: Just a simple thought on health checks after a soft
reload of HAProxy

 Patrick Hemmer haproxy@... writes:

   From: Willy Tarreau w at 1wt.eu

   Sent:  2014-01-25 05:45:11 E

 Till now that's exactly what's currently done. The servers are marked
 almost dead, so the first check gives the verdict. Initially we had
 all checks started immediately. But it caused a lot of issues at several
 places where there were a high number of backends or servers mapped to
 the same hardware, because the rush of connection really caused the
 servers to be flagged as down. So we started to spread the checks over
 the longest check period in a farm. 

 Is there a way to enable this behavior? In my
 environment/configuration, it causes absolutely no issue that all
 the checks be fired off at the same time.
 As it is right now, when haproxy starts up, it takes it quite a
 while to discover which servers are down.
 -Patrick

 I faced the same problem in http://thread.gmane.org/
 gmane.comp.web.haproxy/14644

 After much contemplation, I decided to just patch away the initial spread 
 check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-
 proxy/haproxy/files/haproxy-immediate-first-check.diff



I definitely think there should be an option to disable the behavior. We
have an automated system which adds and removes servers from the config,
and then bounces haproxy. Every time haproxy is bounced, we have a
period where it can send traffic to a dead server.


There's also a related bug on this.
The bug is that when I have a config with inter 30s fastinter 1s and
no httpchk enabled, when haproxy first starts up, it spreads the checks
over the period defined as fastinter, but the stats output says UP 1/3
for the full 30 seconds. It also says L4OK in 30001ms, when I know it
doesn't take the server 30 seconds to simply accept a connection.
Yet you get different behavior when using httpchk. When I add option
httpchk, it still spreads the checks over the 1s fastinter value, but
the stats output goes full UP immediately after the check occurs, not
UP 1/3. It also says L7OK/200 in 0ms, which is what I expect to see.

-Patrick
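The check-timing knobs being discussed look like this in a config (backend name, server address and the httpchk URL are assumed): `inter` is the steady-state interval and `fastinter` the accelerated interval used during state transitions, which is also the window the initial checks get spread over:

```
backend app
    option httpchk GET /health
    server s1 10.0.0.1:80 check inter 30s fastinter 1s rise 3 fall 2
```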



Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-02-24 Thread Patrick Hemmer
Unfortunately retry doesn't work in our case as we run haproxy on 2
layers, frontend servers and backend servers (to distribute traffic
among multiple processes on each server). So when an app on a server
goes down, the haproxy on that server is still up and accepting
connections, but the layer 7 http checks from the frontend haproxy are
failing. But since the backend haproxy is still accepting connections,
the retry option does not work.

-Patrick


*From: *Baptiste bed...@gmail.com
*Sent: * 2014-02-24 07:18:00 E
*To: *Malcolm Turnbull malc...@loadbalancer.org
*CC: *Neil n...@iamafreeman.com, Patrick Hemmer
hapr...@stormcloud9.net, HAProxy haproxy@formilux.org
*Subject: *Re: Just a simple thought on health checks after a soft
reload of HAProxy

 Hi Malcolm,

 Hence the retry and redispatch options :)
 I know it's a dirty workaround.

 Baptiste


 On Sun, Feb 23, 2014 at 8:42 PM, Malcolm Turnbull
 malc...@loadbalancer.org wrote:
 Neil,

 Yes, peers are great for passing stick tables to the new HAProxy
 instance and any current connections bound to the old process will be
 fine.
 However  any new connections will hit the new HAProxy process and if
 the backend server is down but haproxy hasn't health checked it yet
 then the user will hit a failed server.



 On 23 February 2014 10:38, Neil n...@iamafreeman.com wrote:
 Hello

 Regarding restarts, rather that cold starts, if you configure peers the
 state from before the restart should be kept. The new process haproxy
 creates is automatically a peer to the existing process and gets the state
 as was.

 Neil

 On 23 Feb 2014 03:46, Patrick Hemmer hapr...@stormcloud9.net wrote:



 
 From: Sok Ann Yap sok...@gmail.com
 Sent: 2014-02-21 05:11:48 E
 To: haproxy@formilux.org
 Subject: Re: Just a simple thought on health checks after a soft reload of
 HAProxy

 Patrick Hemmer haproxy@... writes:

   From: Willy Tarreau w at 1wt.eu

   Sent:  2014-01-25 05:45:11 E

 Till now that's exactly what's currently done. The servers are marked
 almost dead, so the first check gives the verdict. Initially we had
 all checks started immediately. But it caused a lot of issues at several
 places where there were a high number of backends or servers mapped to
 the same hardware, because the rush of connection really caused the
 servers to be flagged as down. So we started to spread the checks over
 the longest check period in a farm.

 Is there a way to enable this behavior? In my
 environment/configuration, it causes absolutely no issue that all
 the checks be fired off at the same time.
 As it is right now, when haproxy starts up, it takes it quite a
 while to discover which servers are down.
 -Patrick

 I faced the same problem in http://thread.gmane.org/
 gmane.comp.web.haproxy/14644

 After much contemplation, I decided to just patch away the initial spread
 check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-
 proxy/haproxy/files/haproxy-immediate-first-check.diff



 I definitely think there should be an option to disable the behavior. We
 have an automated system which adds and removes servers from the config, 
 and
 then bounces haproxy. Every time haproxy is bounced, we have a period where
 it can send traffic to a dead server.


 There's also a related bug on this.
 The bug is that when I have a config with inter 30s fastinter 1s and no
 httpchk enabled, when haproxy first starts up, it spreads the checks over
 the period defined as fastinter, but the stats output says UP 1/3 for the
 full 30 seconds. It also says L4OK in 30001ms, when I know it doesn't 
 take
 the server 30 seconds to simply accept a connection.
 Yet you get different behavior when using httpchk. When I add option
 httpchk, it still spreads the checks over the 1s fastinter value, but the
 stats output goes full UP immediately after the check occurs, not UP
 1/3. It also says L7OK/200 in 0ms, which is what I expect to see.

 -Patrick



 --
 Regards,

 Malcolm Turnbull.

 Loadbalancer.org Ltd.
 Phone: +44 (0)870 443 8779
 http://www.loadbalancer.org/




Re: AW: Keeping statistics after a reload

2014-02-28 Thread Patrick Hemmer
I have seen feature requests in the past that when haproxy reloads, to
pull the health status of the servers so that haproxy knows their state
without having to health check them. Willy has said he liked the idea
(http://marc.info/?l=haproxym=139064677914723). If this gets
implemented, it would probably be a minor detail to not only dump the
up/down state, but all stats.

-Patrick




*From: *PiBa-NL piba.nl@gmail.com
*Sent: * 2014-02-28 11:15:19 E
*To: *Andreas Mock andreas.m...@drumedar.de, haproxy@formilux.org
haproxy@formilux.org
*Subject: *Re: AW: Keeping statistics after a reload

 Hi Andreas,

 It's not like your question was wrong, but probably there is no
 good/satisfying short answer to this, and it was overrun by other
 mails...

 As far as I know it is not possible to keep this kind of information
 persisted in haproxy itself when a config restart is needed.

 The -sf only makes sure old connections will nicely be closed when
 they are 'done'.

 I have 'heard' of statistics gathering tools that use the haproxy unix
 stats socket to query the stats and store the information in a
 separate database that way you could get continued statistics after
 the config is changed.. I don't have any examples on how to do this or
 have a name of such a tool in mind though.. Though googling for
 haproxy monitoring quickly shows some commercial tools that have
 haproxy plugins and probably would provide answers to the questions
 you have.

 Maybe others on the list do use programs/scripts/tools to also keep
 historical/cumulative data for haproxy and can share their experience
 with it?

 Greets PiBa-NL

 Andreas Mock schreef op 28-2-2014 16:33:
 Hi all,

 the list is normally really responsive. In this case nobody
 gave an answer. So, I don't know whether my question was such a
 stupid one that nobody wanted to answer.

 So, I bring it up again in the hope someone is answering:
 Is there a way to reload the configuration without losing
 current statistics? Or is this conceptually not possible?

 Best regards
 Andreas Mock

 -Ursprüngliche Nachricht-
 Von: Andreas Mock [mailto:andreas.m...@drumedar.de]
 Gesendet: Montag, 24. Februar 2014 16:36
 An: haproxy@formilux.org
 Betreff: Keeping statistics after a reload

 Hi all,

 is there a way to reload a haproxy config without resetting the
 statistics shown on the stats page?

 I used

 haproxy -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

 to make such a reload. But after that all statistics are reset.

 Best regards
 Andreas Mock








Re: rewrite URI help

2014-03-04 Thread Patrick Hemmer
The haproxy log contains the original request, not the rewritten one. If
you want to see the rewritten URL you need to look at the backend server
which is receiving the request.

-Patrick



*From: *Steve Phillips stw...@gmail.com
*Sent: * 2014-03-04 19:54:44 E
*To: *HAProxy haproxy@formilux.org
*Subject: *rewrite URI help

 Trying to reverse proxy all requests to 

 /slideshare 

 to 

 www.slideshare.net/api/2/get_slideshow

 my front-end config:

  acl url_slideshare   path_dir   slideshare
  use_backend slideshare if url_slideshare

 and back-end:

 backend slideshare
   option http-server-close
   option httpclose
   reqrep ^([^\ ]*)\ /slideshare(.*)  \1\ /api/2/get_slideshow\2
   server slideshare www.slideshare.net:443 ssl verify none

 requests to /slideshow however, are not being rewritten:

 173.11.67.214:60821
 [04/Mar/2014:19:49:03.257] main slideshare/slideshare
 6142/0/289/121/6552 404 9299 - -  0/0/0/0/0 0/0 {} GET
 /slideshare?slideshow_url=http%3A%2F%2Fwww.slideshare.net%2FAaronKlein1%2Foptimizing-aws-economics&detailed=1&api_key=msCpLON8&hash=a7fe5fd52cc86e4a4a3d1022cb7c63476b79e044&ts=1393980574
 HTTP/1.1

 Is my regex incorrect?  Am I missing something else?  

 Thanks.

 Steve



tcp-request content track

2014-03-11 Thread Patrick Hemmer
2 related questions:

I'm trying to find a way to concat multiple samples to use in a stick table.
Basically in my frontend I pattern match on the request path to
determine which backend to send a request to. The client requests also
have a client ID header. I want to rate limit based on a combination of
this pattern that matched, and the client ID. Currently the way I do
this is an http-request set-header rule that adds a new header
combining a unique ID for the pattern that matched along with the
client-ID header. Then in the backend I have a tcp-request content
track-sc2 on that header. This works, but I'm wondering if there's a
better way.


Secondly, the above works, but when I do a show table mybackend on the
stats socket, both the conn_cur and use counters never decrease.
They seem to be acting as the total number of requests, not the number
of active connections. Is this a bug, or am I misunderstanding something?


-Patrick


Re: tcp-request content track

2014-03-12 Thread Patrick Hemmer
Created a new config as an example. My existing config is huge, and hard
to read (generated programmatically).

In regards to the bug, it appears it was a bug. I was using 1.5-dev19.
After upgrading to 1.5-dev22 it started behaving as expected.

Below is the config I'm using to accomplish what I want. As mentioned,
I'm basically rate limiting on a combination of the X-Client-Id header
and the matching URL. And as you can see, it's quite ugly and complex to
accomplish it :-(
For example, the same X-Client-Id should be able to hit /foo/bar 3 times
every 15 seconds, with only 1 open connection (the  rules). It
should be able to hit /asdf at 5 times every 15 seconds with 3 open
connections (the  rules).


global
log 127.0.0.1:514 local1 debug
maxconn 4096
daemon
stats socket /tmp/haproxy.sock level admin

defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
timeout connect 200
timeout client 6
timeout server 17
option clitcpka
option srvtcpka
option abortonclose
stats enable
stats uri /haproxy/stats

frontend f1
bind *:1500
option httpclose

acl internal dst 127.0.0.2
acl have_request_id req.fhdr(X-Request-Id) -m found

http-request set-header X-API-URL %[path] if !internal
http-request add-header X-Request-Timestamp %Ts.%ms
http-request set-header X-Request-Id %[req.fhdr(X-Request-Id)] if
internal have_request_id
http-request set-header X-Request-Id %{+X}o%pid-%rt if !internal ||
!have_request_id
http-request set-header X-API-Host i-12345678
http-response set-header X-API-Host i-12345678

unique-id-format %{+X}o%pid-%rt
log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r


acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

acl _path path_reg ^/foo/([^\ ?]*)$
acl _method method GET
http-request set-header X-Rewrite-Id  if !rewrite-found
_path _method
acl -rewrite req.hdr(X-Rewrite-Id) -m str 
http-request set-header X-Limit-Id %[req.hdr(X-Client-Id)] if
-rewrite
use_backend b1 if -rewrite
reqrep ^(GET)\ /foo/([^\ ?]*)([\ ?].*|$) \1\ /echo/bar/\2\3 if
-rewrite

acl _path path_reg ^/([^\ ?]*)$
acl _method method GET
http-request set-header X-Rewrite-Id  if !rewrite-found
_path _method
acl -rewrite req.hdr(X-Rewrite-Id) -m str 
http-request set-header X-Limit-Id %[req.hdr(X-Client-Id)] if
-rewrite
use_backend b1 if -rewrite
reqrep ^(GET)\ /([^\ ?]*)([\ ?].*|$) \1\ /echo/\2\3 if -rewrite

backend b1
stick-table type string len 12 size 1000 expire 1h store
http_req_rate(15000),conn_cur
tcp-request content track-sc2 req.hdr(X-Limit-ID)

acl -rewrite req.hdr(X-Rewrite-Id) -m str 
acl _req_rate sc2_http_req_rate gt 3
acl _conn_cur sc2_conn_cur gt 1
tcp-request content reject if -rewrite _req_rate
tcp-request content reject if -rewrite _conn_cur

acl -rewrite req.hdr(X-Rewrite-Id) -m str 
acl _req_rate sc2_http_req_rate gt 5
acl _conn_cur sc2_conn_cur gt 3
tcp-request content reject if -rewrite _req_rate
tcp-request content reject if -rewrite _conn_cur


server s1 127.0.0.1:2700
server s2 127.0.0.1:2701
server s3 127.0.0.1:2702



-Patrick



*From: *Baptiste bed...@gmail.com
*Sent: * 2014-03-12 06:26:32 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: tcp-request content track

 It would be easier to help you if you share your configuration!

 Baptiste

 On Wed, Mar 12, 2014 at 1:36 AM, Patrick Hemmer hapr...@stormcloud9.net 
 wrote:
 2 related questions:

 I'm trying to find a way to concat multiple samples to use in a stick table.
 Basically in my frontend I pattern match on the request path to determine
 which backend to send a request to. The client requests also have a client
 ID header. I want to rate limit based on a combination of this pattern that
 matched, and the client ID. Currently the way I do this is an http-request
 set-header rule that adds a new header combining a unique ID for the
 pattern that matched along with the client-ID header. Then in the backend I
 have a tcp-request content track-sc2 on that header. This works, but I'm
 wondering if there's a better way.


 Secondly, the above works, but when I do a show table mybackend on the
 stats socket, both the conn_cur and use counters never decrease. They
 seem to be acting as the total number of requests, not the number of active
 connections. Is this a bug, or am I misunderstanding something?


 -Patrick



module/plugin support?

2014-03-18 Thread Patrick Hemmer
I was wondering if there were ever any thoughts about adding
module/plugin support to haproxy.

The plugin would be used for adding features to haproxy that are beyond
the scope of haproxy's core focus (fast simple load balancing).
Reading the recent radius authentication thread surprised me. I never
would have expected that to be something haproxy would support. But I
think it would make sense as a plugin.

-Patrick


Re: Radius authentication

2014-03-18 Thread Patrick Hemmer
I'm assuming it'll be generic authentication. What information will be
made available to the auth daemon? Just the Authorization header?

I would love a feature that allowed any/multiple header to be passed
through. We use haproxy on an API service, which all incoming requests
must pass in a key and signature. The signature is a hash of a secret
token, the URI and several headers. Currently each backend application
that receives the request has to perform the authentication, but it
would be awesome if we could leverage this auth daemon to perform the
authentication before passing the request through.

-Patrick


*From: *Baptiste bed...@gmail.com
*Sent: * 2014-03-18 11:03:56 E
*To: *Roel Cuppen r...@cuppie.com
*CC: *HAProxy haproxy@formilux.org
*Subject: *Re: Radius authentication

 Well, I'm currently writing a piece of code which stands behind
 HAProxy and whose purpose is to authenticate a user.
 Once authenticated, it updates HAProxy which, in turn, lets the user
 browse the application and sets authentication requirements on the fly.

 I think OTP will be possible :)

 Still a lot of work to do on this project and HAProxy needs some
 patches as well, so I can't say more for now.
 Just stay tuned, I'll update the ML once done :)

 That said, if you have some requirements, this is the moment :)

 Baptiste


 On Tue, Mar 18, 2014 at 2:04 PM, Roel Cuppen r...@cuppie.com wrote:
 Hi Baptiste,

 Many thanks for your explanation.
 What kind of daemon is it?


 OTP = One Time Password.

 Kind regards,

 Roel


 2014-03-18 11:03 GMT+01:00 Baptiste bed...@gmail.com:

 Hi Roel,

 Let say there are currently some developments in that way.
 It won't be part of HAProxy, but rather a third party daemon
 interacting deeply with HAProxy.

 What do you mean by OTP?

 Baptiste



 On Mon, Mar 17, 2014 at 9:43 PM, Roel Cuppen r...@cuppie.com wrote:
 Hi,

 I would like to know if it is possible to add radius authentication, so
 that the http authentication users can exist in a radius database.

 Whenever a radius authentication feature is active, it's possible to add
 OTP authentication.

 Kind regards,

 Cuppie




ereq steadily increasing

2014-03-28 Thread Patrick Schless
I am running on 1.5 dev22, and doing SSL termination. Traffic seems to be
handled fine, but my ereq is steadily rising. Poking at the source, it
looks like this can be caused by a number of different errors.

What's the next step for trying to determine what's causing these? I tried
bumping my connect and cli timeouts, but that didn't change anything.


Thanks,
Patrick


Re: ereq steadily increasing

2014-03-29 Thread Patrick Schless
Some more info:

I am getting reports from users of high numbers of 504s.

My timeouts are pretty high (while trying to debug this problem), so it
doesn't seem like they are the issue:
  timeout connect 20s
  timeout client 80s
  timeout server 80s
  timeout http-keep-alive 50s

I have http logging on, but I am not seeing any 5xx responses (almost all
200s, with a low number of 4xx, which seems about right).


On Fri, Mar 28, 2014 at 7:23 PM, Patrick Schless
patrick.schl...@gmail.comwrote:

 I am running on 1.5 dev22, and doing SSL termination. Traffic seems to be
 handled fine, but my ereq is steadily rising. Poking at the source, it
 looks like this can be caused by a number of different errors.

 What's the next step for trying to determine what's causing these? I tried
 bumping my connect and cli timeouts, but that didn't change anything.


 Thanks,
 Patrick



Re: ereq steadily increasing

2014-03-29 Thread Patrick Schless
sorry, sent that before it was ready. here's the complete message:

Some more info:

I am getting reports from users of high numbers of 504s.

My timeouts are pretty high (while trying to debug this problem), so it
doesn't seem like they are the issue:
  timeout connect 20s
  timeout client 80s
  timeout server 80s
  timeout http-keep-alive 50s

I have http logging on, but I am not seeing any 5xx responses (almost all
200s, with a low number of 4xx, which seems about right).

I am tracking the count of all status codes, and it seems that the ereq
count tracks pretty closely to the number of 400s that are in the haproxy
log (though the logs have a couple more 400s than are reported by stats).

Logs for these 400s look like one of three things:
1: Mar 29 15:13:42.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:45381
[29/Mar/2014:15:13:42.181] frontend_https~ frontend_https/NOSRV
-1/-1/-1/-1/46 400 187 - - CR-- 1031/1031/0/0/0 0/0 BADREQ

2: Mar 29 15:13:41.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:38440
[29/Mar/2014:15:13:40.874] frontend_https~ tapp_http/tapp-p8b
394/0/1/29/424 400 183 - -  1046/1046/23/9/0 0/0 GET
/v1/data/?key=k1&interval=1min&function=mean&start=2014-03-29T20%3A13%3A44.000Z&end=2014-03-29T20%3A13%3A40.623Z
HTTP/1.1

3: Mar 29 15:09:19.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:51969
[29/Mar/2014:15:09:19.213] frontend_https~ frontend_https/NOSRV
-1/-1/-1/-1/118 400 187 - - PR-- 1087/1087/0/0/0 0/0 BADREQ

These are spread across a variety of customers and don't seem related to
SSL (since some of the errors are on the http frontend). The counts for the
various types of 400s are here:
[patrick@haproxy-k49 ~]$ sudo grep haproxy /var/log/messages | grep -E
"[0-9] 400 [0-9]" | awk '{print $6 " " $9 " " $11 " " $15}' | sed
"s/:[0-9]*//" | sed "s/tapp-.../tapp-abc/" | sort | uniq -c | sed
"s/[0-9][0-9][0-9]\\?\\./x./g"
   37 x.x.x.245 tapp_http/tapp-abc 400
   12 x.x.x.25 frontend_http/NOSRV 400 CR--
 1182 x.x.x.35 frontend_https/NOSRV 400 CR--
    1 x.x.x.94 frontend_http/NOSRV 400 CR--
   35 x.x.x.65 tapp_http/tapp-abc 400
    8 x.x.x.29 frontend_http/NOSRV 400 PR--
   89 x.x.x.96 frontend_https/NOSRV 400 PR--


My guess is that requests like (2) are the ones that end up as 400s but
don't register as ereq's (just due to the low frequency of them).

The lines like (1) (the CR lines) I'm assuming as premature closes by the
client, and there's maybe nothing I can do about that.

For lines like (3) (the PR lines), I don't understand why the proxy is
denying them. Is there any way to see exactly what is being sent for these
connections?

Thanks,
Patrick
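The four-character termination-state field in these lines ("CR--", "PR--") can be decoded mechanically; a minimal Python sketch, with the meanings of the first two characters paraphrased from haproxy's session-state documentation (this sketch is not part of the original message):

```python
# Sketch: decode the first two characters of haproxy's termination-state
# field, per the "session state at disconnection" section of the docs.
FIRST = {
    "C": "the client aborted",
    "S": "the server aborted or errored",
    "P": "the proxy aborted (e.g. blocked or invalid request)",
    "c": "a client-side timeout expired",
    "s": "a server-side timeout expired",
    "-": "normal completion",
}
SECOND = {
    "R": "while waiting for a complete request",
    "Q": "while the session was queued",
    "C": "while connecting to the server",
    "H": "while waiting for response headers",
    "D": "during data transfer",
    "-": "normal completion",
}

def decode(flags):
    return f"{FIRST.get(flags[0], '?')} {SECOND.get(flags[1], '?')}"

print(decode("CR--"))  # the client aborted while waiting for a complete request
print(decode("PR--"))  # the proxy aborted (e.g. blocked or invalid request) while waiting for a complete request
```

So the CR-- entries above are client aborts mid-request, and PR-- means haproxy itself refused the request.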


On Sat, Mar 29, 2014 at 3:12 PM, Patrick Schless
patrick.schl...@gmail.comwrote:

 Some more info:

 I am getting reports from users of high numbers of 504s.

 My timeouts are pretty high (while trying to debug this problem), so it
 doesn't seem like they are the issue:
   timeout connect 20s
   timeout client 80s
   timeout server 80s
   timeout http-keep-alive 50s

 I have http logging on, but I am not seeing any 5xx responses (almost all
 200s, with a low number of 4xx, which seems about right).


 On Fri, Mar 28, 2014 at 7:23 PM, Patrick Schless 
 patrick.schl...@gmail.com wrote:

 I am running on 1.5 dev22, and doing SSL termination. Traffic seems to be
 handled fine, but my ereq is steadily rising. Poking at the source, it
 looks like this can be caused by a number of different errors.

 What's the next step for trying to determine what's causing these? I
 tried bumping my connect and cli timeouts, but that didn't change anything.


 Thanks,
 Patrick





Re: No ssl or crt in bind when compiled with USE_OPENSSL=1

2014-03-30 Thread Patrick Hemmer
1.4 does not support SSL. SSL was added in 1.5-dev12

-Patrick



*From: *Juan Jimenez jjime...@electric-cloud.com
*Sent: * 2014-03-30 02:44:42 E
*To: *haproxy@formilux.org haproxy@formilux.org
*Subject: *No ssl or crt in bind when compiled with USE_OPENSSL=1

 I am trying to figure out why haproxy 1.4.25 does not like crt and ssl in
 bind ...

 I recompiled with:

   make TARGET=linux2628 USE_OPENSSL=1
   make install

 The cfg file looks like this:

 global 
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 #log loghostlocal0 info
 maxconn 4096
 #chroot /usr/share/haproxy
 user skytap
 group skytap
 daemon
 #debug
 #quiet

 defaults
 log global
 option  dontlognull
 retries 3
 option redispatch
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5

 listen stats *:1936
mode http
stats enable
stats realm Haproxy\ Statistics
stats uri /
stats refresh 30
stats show-legends

 frontend commander-server-frontend-insecure
  mode http
  bind 0.0.0.0:8000
  default_backend commander-server-backend

 frontend commander-stomp-frontend
  mode tcp
  bind 0.0.0.0:61613 ssl crt /home/skytap/server.pem
  default_backend commander-stomp-backend
  option tcplog
  log global

 frontend commander-server-frontend-secure
  mode tcp
  bind 0.0.0.0:8443 ssl crt /home/skytap/server.pem
  default_backend commander-server-backend

 backend commander-server-backend
 mode http
 server node1 10.0.0.7:8000 check
 server node2 10.0.0.9:8000 check
   server node3 10.0.0.10:8000 check
 stats enable
 option httpchk GET /commanderRequest/health

 backend commander-stomp-backend
 mode tcp
 server node1 10.0.0.7:61613 check
 server node2 10.0.0.9:61613 check
   server node3 10.0.0.10:61613 check
 option tcplog
 log global



 And the error messages are:

 [skytap@haproxy haproxy-1.4.25]$ haproxy -c -f /etc/haproxy/haproxy.cfg
 [ALERT] 087/233343 (3964) : parsing [/etc/haproxy/haproxy.cfg:38] : 'bind'
 only supports the 'transparent', 'defer-accept', 'name', 'id', 'mss' and
 'interface' options.
 [ALERT] 087/233343 (3964) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind'
 only supports the 'transparent', 'defer-accept', 'name', 'id', 'mss' and
 'interface' options.
 [ALERT] 087/233343 (3964) : Error(s) found in configuration file :
 /etc/haproxy/haproxy.cfg
 [ALERT] 087/233343 (3964) : Fatal errors found in configuration.


 ??


 Juan Jiménez
 Electric Cloud, Inc.
 Sr. Solutions Engineer - US Northeast Region
 Mobile +1.787.464.5062 | Fax +1.617-766-6980
 jjime...@electric-cloud.com
 www.electric-cloud.com http://www.electric-cloud.com/





Re: ereq steadily increasing

2014-03-30 Thread Patrick Schless
Very interesting, thanks for the tip. I only see two requests there, one of
which seems like nonsense or a vulnerability scan
(\r\n\r\n\x00\x00\x00), and the other has a space in the path that's
being requested due to improper escaping. Neither of those is a huge deal
to me, though if the downstream server (nginx) would handle the space then
I suppose I'd want to use the accept-invalid-http-request option.

I finally captured some 504s in the debug logging. 129 since yesterday
afternoon. They all seem to look like this:
Mar 30 14:46:19.000 haproxy-k49 haproxy[19450]: x.x.x.x:49638
[30/Mar/2014:14:45:19.533] frontend_https~ tapp_http/tapp-m2t
77/0/4/6/60081 504 343 - -  1255/1255/17/4/0 0/0 GET /data/?a=b
HTTP/1.1

I'm guessing that the 6/60081 means that 60s is some timeout threshold,
and 60.081 seconds were reached, which caused the 504. Is that correct? I
am also guessing that this is caused by slowness on the downstream
application servers. I do see a spike in the number of requests at the same
times as these 504s, and I suspect adding more downstream servers (with
balance leastconn) will help here.

I'm a little confused by the 60s timeout, though. I have these (below) in
my config, so I'd expect that the timeout causing the 504 would be the
timeout server, which isn't 60s.

defaults
  log global
  option httplog
  retries 3
  option redispatch
  maxconn 8196
  timeout connect 20s
  timeout client 80s
  timeout server 80s
  timeout http-keep-alive 50s
  stats enable
  stats auth user:password
  stats uri /stats

Is my 80s server timeout not taking effect for some reason, or is the 60s some
other setting? I'm mostly just curious, since 80s was artificially high
(while I try to address these problem), and I'll probably lower it to 50s
before I'm done.
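As an aside on reading these entries: the slash-separated timer field can be split into named values. A small sketch (Python; the names Tq/Tw/Tc/Tr/Tt follow the haproxy log-format documentation, and the sample field is the one from the 504 line above):

```python
# Sketch: split haproxy's Tq/Tw/Tc/Tr/Tt timer field (milliseconds;
# -1 means that phase was never reached).
def parse_timers(field):
    names = ("Tq", "Tw", "Tc", "Tr", "Tt")
    return dict(zip(names, (int(v) for v in field.split("/"))))

t = parse_timers("77/0/4/6/60081")
print(t)  # {'Tq': 77, 'Tw': 0, 'Tc': 4, 'Tr': 6, 'Tt': 60081}
```

Here Tt (total session duration) is just over 60s while the earlier phases are all a few milliseconds, which is why the line looks like a 60-second timeout firing.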

Thanks,
Patrick


On Sun, Mar 30, 2014 at 2:22 PM, Baptiste bed...@gmail.com wrote:

 Hi Patrick,

 Just issue a 'show errors' on the HAProxy stats socket and you'll know why
 these requests have been denied.
 You can also give 'option accept-invalid-http-request' a try to tell
 haproxy to be less strict about HTTP compliance...

 Baptiste


 On Sat, Mar 29, 2014 at 9:37 PM, Patrick Schless
 patrick.schl...@gmail.com wrote:
  sorry, sent that before it was ready. here's the complete message:
 
 
  Some more info:
 
  I am getting reports from users of high numbers of 504s.
 
  My timeouts are pretty high (while trying to debug this problem), so it
  doesn't seem like they are the issue:
timeout connect 20s
timeout client 80s
timeout server 80s
timeout http-keep-alive 50s
 
  I have http logging on, but I am not seeing any 5xx responses (almost all
  200s, with a low number of 4xx, which seems about right).
 
  I am tracking the count of all status codes, and it seems that the ereq
  count tracks pretty closely to the number of 400s that are in the haproxy
  log (though the logs have a couple more 400s than are reported by stats).
 
  Logs for these 400s look like one of three things:
  1: Mar 29 15:13:42.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:45381
  [29/Mar/2014:15:13:42.181] frontend_https~ frontend_https/NOSRV
  -1/-1/-1/-1/46 400 187 - - CR-- 1031/1031/0/0/0 0/0 BADREQ
 
  2: Mar 29 15:13:41.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:38440
  [29/Mar/2014:15:13:40.874] frontend_https~ tapp_http/tapp-p8b
 394/0/1/29/424
  400 183 - -  1046/1046/23/9/0 0/0 GET
 
 /v1/data/?key=k1interval=1minfunction=meanstart=2014-03-29T20%3A13%3A44.000Zend=2014-03-29T20%3A13%3A40.623Z
  HTTP/1.1
 
  3: Mar 29 15:09:19.000 haproxy-k49 haproxy[19450]: xx.xx.xx.xx:51969
  [29/Mar/2014:15:09:19.213] frontend_https~ frontend_https/NOSRV
  -1/-1/-1/-1/118 400 187 - - PR-- 1087/1087/0/0/0 0/0 BADREQ
 
  These are spread across a variety of customers and don't seem related to
 SSL
  (since some of the errors are on the http frontend). The counts for the
  various types of 400s are here:
  [patrick@haproxy-k49 ~]$ sudo grep haproxy /var/log/messages | grep -E
  "[0-9] 400 [0-9]" | awk '{print $6 " " $9 " " $11 " " $15}' | sed
  "s/:[0-9]*//" | sed "s/tapp-.../tapp-abc/" | sort | uniq -c | sed
  "s/[0-9][0-9][0-9]\\?\\./x./g"
   37 x.x.x.245 tapp_http/tapp-abc 400 
   12 x.x.x.25 frontend_http/NOSRV 400 CR--
 1182 x.x.x.35 frontend_https/NOSRV 400 CR--
1 x.x.x.94 frontend_http/NOSRV 400 CR--
   35 x.x.x.65 tapp_http/tapp-abc 400 
8 x.x.x.29 frontend_http/NOSRV 400 PR--
   89 x.x.x.96 frontend_https/NOSRV 400 PR--
 
 
  My guess is that requests like (2) are the ones that end up as 400s but
  don't register as ereq's (just due to the low frequency of them).
 
  The lines like (1) (the CR lines) I'm assuming as premature closes by the
  client, and there's maybe nothing I can do about that.
 
  For lines like (3) (the PR lines), I don't understand why the proxy is
  denying them. Is there any way to see exactly what is being sent for these
  connections?
 
  Thanks,
  Patrick
 
 
  On Sat, Mar 29, 2014 at 3:12 PM, Patrick Schless

haproxy intermittently not connecting to backend

2014-04-01 Thread Patrick Hemmer
We have an issue with haproxy (1.5-dev22-1a34d57) where it is
intermittently not connecting to the backend server. However the
behavior it is exhibiting seems strange.
The reason I say strange is that in one example, it logged that the
client disconnected after ~49 seconds with a connection flags of CC--.
However our config has timeout connect 5000, so it should have timed
out connecting to the backend server after 5 seconds. Additionally we
have retries 3 in the config, so upon timing out, it should have tried
another backend server, but it never did (the retries counter in the log
shows 0).
At the time of this log entry, the backend server is responding
properly. For the ~49 seconds prior to the log entry, the backend server
has taken other requests. The backend server is also another haproxy
(same version).

Here's an example of one such log entry:

198.228.211.13:60848 api~ platform-push/i-84d931a5 49562/0/-1/-1/49563
0/0/0/0/0 0/0 691/212 503 CC-- 4F8E-4624 + GET
/1/sync/notifications/subscribe?sync_box_id=12345&sender=27B9A93C-F473-4385-A662-352AD34A2453
HTTP/1.1


The log format is defined as:
%ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\
%U/%B\ %ST\ %tsc\ %ID\ +\ %r

Running a 'show errors' on the stats socket did not return any relevant
results.

Here's the relevant portions of the haproxy config. It is not the entire
thing as the whole config is 1,513 lines long.

global
  log 127.0.0.1 local0
  maxconn 20480
  user haproxy
  group haproxy
  daemon
  stats socket /var/run/hapi/haproxy/haproxy.sock level admin

defaults
  log global
  mode http
  option httplog
  option dontlognull
  option log-separate-errors
  retries 3
  option redispatch
  timeout connect 5000
  timeout client 6
  timeout server 17
  option clitcpka
  option srvtcpka
  option abortonclose
  option splice-auto
  monitor-uri /haproxy/ping
  stats enable
  stats uri /haproxy/stats
  stats refresh 15
  stats auth user:pass

frontend api
  bind *:80
  bind *:443 ssl crt /etc/haproxy/server.pem
  maxconn 2
  option httpclose
  option forwardfor
  acl internal src 10.0.0.0/8
  acl have_request_id req.fhdr(X-Request-Id) -m found
  http-request set-nice -100 if internal
  http-request add-header X-API-URL %[path] if !internal
  http-request add-header X-Request-Timestamp %Ts.%ms
  http-request add-header X-Request-Id %[req.fhdr(X-Request-Id)] if
internal have_request_id
  http-request set-header X-Request-Id %{+X}o%pid-%rt if !internal ||
!have_request_id
  http-request add-header X-API-Host i-4a3b1c6a
  unique-id-format %{+X}o%pid-%rt
  log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r
  default_backend DEFAULT_404

  acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

  acl nqXn_path path_reg ^/1/sync/notifications/subscribe/([^\ ?]*)$
  acl nqXn_method method OPTIONS GET HEAD POST PUT DELETE TRACE CONNECT
PATCH
  http-request set-header X-Rewrite-Id nqXn if !rewrite-found nqXn_path
nqXn_method
  acl rewrite-nqXn req.hdr(X-Rewrite-Id) -m str nqXn
  use_backend platform-push if rewrite-nqXn
  reqrep ^(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PATCH)\
/1/sync/notifications/subscribe/([^\ ?]*)([\ ?].*|$) \1\
/1/sync/subscribe/\2\3 if rewrite-nqXn


backend platform-push
  option httpchk GET /ping
  default-server inter 15s fastinter 1s
  server i-6eaf724d 10.230.23.64:80 check observe layer4
  server i-84d931a5 10.230.42.8:80 check observe layer4



Re: haproxy intermittently not connecting to backend

2014-04-01 Thread Patrick Hemmer
Apologies, my mail client went stupid. Here's the log entry unmangled:

198.228.211.13:60848 api~ platform-push/i-84d931a5 49562/0/-1/-1/49563
0/0/0/0/0 0/0 691/212 503 CC-- 4F8E-4624 + GET
/1/sync/notifications/subscribe?sync_box_id=12496&sender=D7A9F93D-F653-4527-A022-383AD55A1943
HTTP/1.1

-Patrick



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-04-01 15:20:15 E
*To: *haproxy@formilux.org
*Subject: *haproxy intermittently not connecting to backend

 We have an issue with haproxy (1.5-dev22-1a34d57) where it is
 intermittently not connecting to the backend server. However the
 behavior it is exhibiting seems strange.
 The reason I say strange is that in one example, it logged that the
 client disconnected after ~49 seconds with a connection flags of
 CC--. However our config has timeout connect 5000, so it should
 have timed out connecting to the backend server after 5 seconds.
 Additionally we have retries 3 in the config, so upon timing out, it
 should have tried another backend server, but it never did (the
 retries counter in the log shows 0).
 At the time of this log entry, the backend server is responding
 properly. For the ~49 seconds prior to the log entry, the backend
 server has taken other requests. The backend server is also another
 haproxy (same version).

 Here's an example of one such log entry:

  198.228.211.13:60848 api~ platform-push/i-84d931a5 49562/0/-1/-1/49563
 0/0/0/0/0 0/0 691/212 503 CC-- 4F8E-4624 + GET
 /1/sync/notifications/subscribe?sync_box_id=12345&sender=27B9A93C-F473-4385-A662-352AD34A2453 HTTP/1.1

 The log format is defined as:
 %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ac/%fc/%bc/%sc/%rc\
 %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

 Running a show errors on the stats socket did not return any
 relevant results.

 Here's the relevant portions of the haproxy config. It is not the
 entire thing as the whole config is 1,513 lines long.

 global
   log 127.0.0.1 local0
   maxconn 20480
   user haproxy
   group haproxy
   daemon
   stats socket /var/run/hapi/haproxy/haproxy.sock level admin

 defaults
   log global
   mode http
   option httplog
   option dontlognull
   option log-separate-errors
   retries 3
   option redispatch
   timeout connect 5000
   timeout client 6
   timeout server 17
   option clitcpka
   option srvtcpka
   option abortonclose
   option splice-auto
   monitor-uri /haproxy/ping
   stats enable
   stats uri /haproxy/stats
   stats refresh 15
   stats auth user:pass

 frontend api
   bind *:80
   bind *:443 ssl crt /etc/haproxy/server.pem
   maxconn 2
   option httpclose
   option forwardfor
   acl internal src 10.0.0.0/8
   acl have_request_id req.fhdr(X-Request-Id) -m found
   http-request set-nice -100 if internal
   http-request add-header X-API-URL %[path] if !internal
   http-request add-header X-Request-Timestamp %Ts.%ms
   http-request add-header X-Request-Id %[req.fhdr(X-Request-Id)] if
 internal have_request_id
   http-request set-header X-Request-Id %{+X}o%pid-%rt if !internal ||
 !have_request_id
   http-request add-header X-API-Host i-4a3b1c6a
   unique-id-format %{+X}o%pid-%rt
   log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r
   default_backend DEFAULT_404

   acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

   acl nqXn_path path_reg ^/1/sync/notifications/subscribe/([^\ ?]*)$
   acl nqXn_method method OPTIONS GET HEAD POST PUT DELETE TRACE
 CONNECT PATCH
   http-request set-header X-Rewrite-Id nqXn if !rewrite-found
 nqXn_path nqXn_method
   acl rewrite-nqXn req.hdr(X-Rewrite-Id) -m str nqXn
   use_backend platform-push if rewrite-nqXn
   reqrep ^(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PATCH)\
 /1/sync/notifications/subscribe/([^\ ?]*)([\ ?].*|$) \1\
 /1/sync/subscribe/\2\3 if rewrite-nqXn


 backend platform-push
   option httpchk GET /ping
   default-server inter 15s fastinter 1s
   server i-6eaf724d 10.230.23.64:80 check observe layer4
   server i-84d931a5 10.230.42.8:80 check observe layer4




Re: modifing default haproxy emit codes

2014-04-02 Thread Patrick Hemmer
You want the errorfile config param.
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#errorfile

-Patrick



*From: *Piavlo lolitus...@gmail.com
*Sent: * 2014-04-02 15:16:22 E
*To: *haproxy@formilux.org
*Subject: *modifing default haproxy emit codes

  Hi,

 According to the docs:

 Haproxy may emit the following status codes by itself :
503  when no server was available to handle the request, or in
 response to
 monitoring requests which match the monitor fail condition
504  when the response timeout strikes before the server responds

 Instead, what I need is that if no server is available, or if the server
 does not send an HTTP response within the defined timeout, haproxy
 responds with a 204. Is that possible?
 I assume I could define a fallback backend that redispatches the
 requests to a dumb HTTP server that always answers with 204? But it
 would be much better if haproxy itself could reply with 204.

 tnx




Re: haproxy intermittently not connecting to backend

2014-04-02 Thread Patrick Hemmer
That makes perfect sense. Thank you very much.

-Patrick


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-04-02 15:38:04 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: haproxy intermittently not connecting to backend

 Hi Patrick,

 On Tue, Apr 01, 2014 at 03:20:15PM -0400, Patrick Hemmer wrote:
 We have an issue with haproxy (1.5-dev22-1a34d57) where it is
 intermittently not connecting to the backend server. However the
 behavior it is exhibiting seems strange.
 The reason I say strange is that in one example, it logged that the
 client disconnected after ~49 seconds with a connection flags of CC--.
 However our config has timeout connect 5000, so it should have timed
 out connecting to the backend server after 5 seconds. Additionally we
 have retries 3 in the config, so upon timing out, it should have tried
 another backend server, but it never did (the retries counter in the log
 shows 0).
 No, retries impacts only retries to the same server, it's option redispatch
 which allows the last retry to be performed on another server. But you have
 it anyway.

 At the time of this log entry, the backend server is responding
 properly. For the ~49 seconds prior to the log entry, the backend server
 has taken other requests. The backend server is also another haproxy
 (same version).

 Here's an example of one such log entry:
 [fixed version pasted here]

 198.228.211.13:60848 api~ platform-push/i-84d931a5 49562/0/-1/-1/49563 
 0/0/0/0/0 0/0 691/212 503 CC-- 4F8E-4624 + GET 
 /1/sync/notifications/subscribe?sync_box_id=12496&sender=D7A9F93D-F653-4527-A022-383AD55A1943 HTTP/1.1
  HTTP/1.1
 OK in fact the client did not wait 49 seconds. If you look closer, you'll
 see that the client remained silent for 49 seconds (typically a connection
 pool or a preconnect) and closed immediately after sending the request (in
 the same millisecond). Since you have option abortonclose, the connection
 was aborted before the server had a chance to respond.

 So I can easily imagine that you randomly get this error, you're in a race
 condition, if the server responds immediately, you win the race and the
 request is handled, otherwise it's aborted.

 Please start by removing option abortonclose, I think it will fix the issue.
 Second thing you can do is to remove option httpclose or replace it with
 option http-server-close which is active and not just passive. The 
 connections
 will last less time on your servers which is always appreciated.

 I'm not seeing any other issue, so with just this you should be fine.
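Spelled out as a config fragment (a sketch of the suggested edit against the configuration quoted below, not a tested change):

```
defaults
    # option abortonclose    # removed: stops aborting requests that the
                             #   client closed right after sending

frontend api
    # option httpclose       # passive close, replaced by the active form:
    option http-server-close
```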

 The log format is defined as:
 %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\
 %U/%B\ %ST\ %tsc\ %ID\ +\ %r

 Running a show errors on the stats socket did not return any relevant
 results.

 Here's the relevant portions of the haproxy config. It is not the entire
 thing as the whole config is 1,513 lines long.

 global
   log 127.0.0.1 local0
   maxconn 20480
   user haproxy
   group haproxy
   daemon
   stats socket /var/run/hapi/haproxy/haproxy.sock level admin

 defaults
   log global
   mode http
   option httplog
   option dontlognull
   option log-separate-errors
   retries 3
   option redispatch
   timeout connect 5000
   timeout client 6
   timeout server 17
   option clitcpka
   option srvtcpka
   option abortonclose
   option splice-auto
   monitor-uri /haproxy/ping
   stats enable
   stats uri /haproxy/stats
   stats refresh 15
   stats auth user:pass

 frontend api
   bind *:80
   bind *:443 ssl crt /etc/haproxy/server.pem
   maxconn 2
   option httpclose
   option forwardfor
   acl internal src 10.0.0.0/8
   acl have_request_id req.fhdr(X-Request-Id) -m found
   http-request set-nice -100 if internal
   http-request add-header X-API-URL %[path] if !internal
   http-request add-header X-Request-Timestamp %Ts.%ms
   http-request add-header X-Request-Id %[req.fhdr(X-Request-Id)] if
 internal have_request_id
   http-request set-header X-Request-Id %{+X}o%pid-%rt if !internal ||
 !have_request_id
   http-request add-header X-API-Host i-4a3b1c6a
   unique-id-format %{+X}o%pid-%rt
   log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r
   default_backend DEFAULT_404

   acl rewrite-found req.hdr(X-Rewrite-ID,1) -m found

   acl nqXn_path path_reg ^/1/sync/notifications/subscribe/([^\ ?]*)$
   acl nqXn_method method OPTIONS GET HEAD POST PUT DELETE TRACE CONNECT
 PATCH
   http-request set-header X-Rewrite-Id nqXn if !rewrite-found nqXn_path
 nqXn_method
   acl rewrite-nqXn req.hdr(X-Rewrite-Id) -m str nqXn
   use_backend platform-push if rewrite-nqXn
   reqrep ^(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PATCH)\
 /1/sync/notifications/subscribe/([^\ ?]*)([\ ?].*|$) \1\
 /1/sync/subscribe/\2\3 if rewrite-nqXn


 backend platform-push
   option httpchk GET /ping
   default-server inter 15s fastinter 1s

timeout after USR1 signal

2014-04-07 Thread Patrick Viet
Hi,

I'm wondering if there is some way to have some kind of timeout after
haproxy receives the USR1 soft stop signal sent by haproxy -sf old
process. I don't want to kill everyone brutally, but if I have a client
that's hanging on for a connection way too long after a reconfiguration,
may he die!! For now I'm thinking about something that sends a TERM after
an hour or so, but there might be something that's a bit cleaner.

Didn't find anything in the docs. Or am I not looking in the right place?

Hints? Experience? Ideas?

Thanks!

Patrick
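Absent a built-in deadline for soft-stopped processes (in this version at least), the TERM-after-a-delay idea can live in a small wrapper. A sketch (shell; the helper name and the stand-in process are illustrative, not haproxy tooling):

```shell
#!/bin/sh
# Sketch: soft-stop a process, then hard-kill it if it is still alive
# after a grace period (for haproxy, the grace would be e.g. 3600s).
soft_stop_with_deadline() {
    pid=$1 grace=$2
    kill -USR1 "$pid" 2>/dev/null            # haproxy: finish current sessions
    ( sleep "$grace"; kill -TERM "$pid" 2>/dev/null ) &
}

# demo: a stand-in that ignores USR1, like a draining old haproxy
sh -c 'trap "" USR1; sleep 10' >/dev/null 2>&1 &
victim=$!
sleep 1                                      # let the trap get installed
soft_stop_with_deadline "$victim" 1
sleep 3                                      # past the 1s grace period
kill -0 "$victim" 2>/dev/null && echo still-running || echo terminated
```

After the grace period the stand-in is gone and the script reports it terminated; with a real haproxy the USR1 would drain sessions first and the TERM only hits stragglers.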


suppress reqrep / use_backend warning

2014-04-08 Thread Patrick Hemmer
Would it be possible to get an option to suppress the warning when a
reqrep rule is placed after a use_backend rule?
[WARNING] 097/205824 (4777) : parsing
[/var/run/hapi/haproxy/haproxy.cfg:1443] : a 'reqrep' rule placed after
a 'use_backend' rule will still be processed before.

I prefer keeping my related rules grouped together, and so this message
pops up every time haproxy is (re)started. Currently it logs out 264
lines each start (I have a lot of rules), and is thus fairly annoying. I
am well aware of what the message means and my configuration is not
affected by it.

-Patrick


haproxy mis-reporting layer 4 checks

2014-04-10 Thread Patrick Hemmer
I've brought up this bug before
(http://marc.info/?l=haproxym=139312718801838), but it seems to not
have gotten any attention, so I'm raising it again.

There is an issue with haproxy mis-reporting layer 4 checks. There are
2, likely related, issues.
1) When haproxy first starts up, it will report the server as UP 1/3
for however long the check interval is set to. If the interval is 30
seconds, it will say UP 1/3 for 30 seconds.
2) Haproxy is adding the check interval time to the time of the check
itself. For example, if I have a check interval of 30 seconds, the
statistics output reports a check completion time of 30001ms.

Attached is a simple configuration that can be used to demonstrate this
issue. Launch haproxy, and then go to http://localhost/haproxy/stats

-Patrick
global
log 127.0.0.1   local0

defaults
log global
modehttp
option  httplog
timeout connect 5000
timeout client 6
timeout server 17

stats   enable
stats   uri /haproxy/stats

frontend f1
bind 0.0.0.0:9000

default_backend b1

backend b1
server s1 localhost:9001 check inter 1

frontend f2
bind 0.0.0.0:9001


Re: haproxy mis-reporting layer 4 checks

2014-04-11 Thread Patrick Hemmer


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-04-11 08:29:15 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: haproxy mis-reporting layer 4 checks

 Hi Patrick,

 On Thu, Apr 10, 2014 at 02:17:02PM -0400, Patrick Hemmer wrote:
 I've brought up this bug before
 (http://marc.info/?l=haproxym=139312718801838), but it seems to not
 have gotten any attention, so I'm raising it again.

 There is an issue with haproxy mis-reporting layer 4 checks. There are
 2, likely related, issues.
 1) When haproxy first starts up, it will report the server as UP 1/3
 for however long the check interval is set to. If the interval is 30
 seconds, it will say UP 1/3 for 30 seconds.
 2) Haproxy is adding the check interval time to the time of the check
 itself. For example, if I have a check interval of 30 seconds, the
 statistics output reports a check completion time of 30001ms.
 We used to have this bug in a certain version (I don't remember which
 one), but I fail to reproduce it anymore with latest master using your
 configuration. What version are you running ?

 Willy

Ah, you're right. I was testing against 1.5-dev22. Using the latest
master this is fixed.

Sorry for the noise.

-Patrick



Re: haproxy intermittently not connecting to backend

2014-04-11 Thread Patrick Hemmer
This just keeps coming back to bug me. I don't think the client closing
the connection should result in a 5XX code. 5XX should indicate a server
issue, and the client closing the connection before the server has a
chance to respond isn't a server issue. Only if the server doesn't
respond within the configured timeout should it be a 5XX.

Nginx uses 499 for client closed connection. Perhaps haproxy could use
that status code as well when `option abortonclose` is used.

-Patrick



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-04-02 15:50:22 E
*To: *haproxy@formilux.org
*Subject: *Re: haproxy intermittently not connecting to backend

 That makes perfect sense. Thank you very much.

 -Patrick

 
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-04-02 15:38:04 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *haproxy@formilux.org
 *Subject: *Re: haproxy intermittently not connecting to backend

 Hi Patrick,

 On Tue, Apr 01, 2014 at 03:20:15PM -0400, Patrick Hemmer wrote:
 We have an issue with haproxy (1.5-dev22-1a34d57) where it is
 intermittently not connecting to the backend server. However the
 behavior it is exhibiting seems strange.
 The reason I say strange is that in one example, it logged that the
 client disconnected after ~49 seconds with a connection flags of CC--.
 However our config has timeout connect 5000, so it should have timed
 out connecting to the backend server after 5 seconds. Additionally we
 have retries 3 in the config, so upon timing out, it should have tried
 another backend server, but it never did (the retries counter in the log
 shows 0).
 No, retries impacts only retries to the same server, it's option redispatch
 which allows the last retry to be performed on another server. But you have
 it anyway.

 At the time of this log entry, the backend server is responding
 properly. For the ~49 seconds prior to the log entry, the backend server
 has taken other requests. The backend server is also another haproxy
 (same version).

 Here's an example of one such log entry:
 [fixed version pasted here]

 198.228.211.13:60848 api~ platform-push/i-84d931a5 49562/0/-1/-1/49563 
 0/0/0/0/0 0/0 691/212 503 CC-- 4F8E-4624 + GET 
 /1/sync/notifications/subscribe?sync_box_id=12496sender=D7A9F93D-F653-4527-A022-383AD55A1943
  HTTP/1.1
 OK in fact the client did not wait 49 seconds. If you look closer, you'll
 see that the client remained silent for 49 seconds (typically a connection
 pool or a preconnect) and closed immediately after sending the request (in
 the same millisecond). Since you have option abortonclose, the connection
 was aborted before the server had a chance to respond.

 So I can easily imagine that you randomly get this error, you're in a race
 condition, if the server responds immediately, you win the race and the
 request is handled, otherwise it's aborted.

 Please start by removing option abortonclose, I think it will fix the 
 issue.
 Second thing you can do is to remove option httpclose or replace it with
 option http-server-close which is active and not just passive. The 
 connections
 will last less time on your servers which is always appreciated.

 I'm not seeing any other issue, so with just this you should be fine.

 The log format is defined as:
 %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\
 %U/%B\ %ST\ %tsc\ %ID\ +\ %r

 Running a show errors on the stats socket did not return any relevant
 results.

 Here's the relevant portions of the haproxy config. It is not the entire
 thing as the whole config is 1,513 lines long.

 global
   log 127.0.0.1 local0
   maxconn 20480
   user haproxy
   group haproxy
   daemon
   stats socket /var/run/hapi/haproxy/haproxy.sock level admin

 defaults
   log global
   mode http
   option httplog
   option dontlognull
   option log-separate-errors
   retries 3
   option redispatch
   timeout connect 5000
   timeout client 6
   timeout server 17
   option clitcpka
   option srvtcpka
   option abortonclose
   option splice-auto
   monitor-uri /haproxy/ping
   stats enable
   stats uri /haproxy/stats
   stats refresh 15
   stats auth user:pass

 frontend api
   bind *:80
   bind *:443 ssl crt /etc/haproxy/server.pem
   maxconn 2
   option httpclose
   option forwardfor
   acl internal src 10.0.0.0/8
   acl have_request_id req.fhdr(X-Request-Id) -m found
   http-request set-nice -100 if internal
   http-request add-header X-API-URL %[path] if !internal
   http-request add-header X-Request-Timestamp %Ts.%ms
  http-request add-header X-Request-Id %[req.fhdr(X-Request-Id)] if internal have_request_id
  http-request set-header X-Request-Id %{+X}o%pid-%rt if !internal || !have_request_id
   http-request add-header X-API-Host i-4a3b1c6a
   unique-id-format %{+X}o%pid-%rt
   log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc

Re: suppress reqrep / use_backend warning

2014-04-13 Thread Patrick Hemmer


*From: *Cyril Bonté cyril.bo...@free.fr
*Sent: * 2014-04-13 11:15:26 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: suppress reqrep / use_backend warning

 Hi Patrick,

 Le 08/04/2014 23:04, Patrick Hemmer a écrit :
 Would it be possible to get an option to suppress the warning when a
 reqrep rule is placed after a use_backend rule?
 [WARNING] 097/205824 (4777) : parsing
 [/var/run/hapi/haproxy/haproxy.cfg:1443] : a 'reqrep' rule placed after
 a 'use_backend' rule will still be processed before.

 I prefer keeping my related rules grouped together, and so this message
 pops up every time haproxy is (re)started. Currently it logs out 264
 lines each start (I have a lot of rules), and is thus fairly annoying. I
 am well aware of what the message means and my configuration is not
 affected by it.


 Do you want to ignore every warning or only some warnings?
I would think ignoring only some warnings would be preferable. Ignoring
all warnings might lead to people disabling them all, and then when a
new warning comes up that hasn't been seen before, it'll be missed.


 For the first case you can use the global keyword quiet (or its
 command line equivalent -q).
Ah, didn't know `quiet` would suppress warnings as well. This might be
acceptable.


 For the second one, there is nothing available yet, but I was thinking
 of something like annotations in configuration comments.
 For example :
 - @ignore-warnings to ignore the warnings of the current line
 - @BEGIN ignore-warnings to start a block of lines where warnings will
 be ignored
 - @END ignore-warnings to stop ignoring warnings.

 frontend test :
   mode http
   reqrep ^([^\ :]*)\ /static/(.*) \1\ /\2
   block if TRUE   # @ignore-warnings
   block if FALSE  # @ignore-warnings
   block if TRUE
   block if TRUE
   block if TRUE
   # @BEGIN ignore-warnings
   block if TRUE
   block if TRUE
   block if TRUE
   block if TRUE
   block if TRUE
   # @END ignore-warnings
   block if TRUE
   block if TRUE
   block if TRUE

 Please find a quick and dirty patch to illustrate. Is this something
 that could be useful ?
Hadn't really thought about the best way to solve it until now. I like
the per-line suppression more than the @BEGIN/@END one. The only other
way I can think of doing this is by having a config directive such as:
ignore-warnings reqrep_use_backend
Which would suppress all occurrences of that specific warning. But then
the warning message itself would need some sort of identifier on it so
we'd know what argument to pass to 'ignore-warnings'.

I'll play with the patch tomorrow, see how manageable it is.

But really, this is a trivial matter. I'd be OK with whatever is decided.


-Patrick


Re: haproxy intermittently not connecting to backend

2014-04-14 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-04-14 11:27:59 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org
*Subject: *Re: haproxy intermittently not connecting to backend

 Hi Patrick,

 On Sat, Apr 12, 2014 at 01:38:54AM -0400, Patrick Hemmer wrote:
 This just keeps coming back to bug me. I don't think the client closing
 the connection should result in a 5XX code. 5XX should indicate a server
 issue, and the client closing the connection before the server has a
 chance to respond isn't a server issue. Only if the server doesn't
 respond within the configured timeout should it be a 5XX.

 Nginx uses 499 for client closed connection. Perhaps haproxy could use
 that status code as well when `option abortonclose` is used.
 It's wrong to invent new status codes, because they'll sooner or later
 conflict with one officially assigned (or worse, they'll become so much
 used that they'll make it harder to improve the standards).
RFC2616 says HTTP status codes are extensible and even gives a
specific scenario for how the client should handle an unregistered code
(look for the "if an unrecognized status code of 431 is received by the
client" example).
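That fallback rule from RFC 2616 (an unrecognized code is treated as the x00 code of its class) can be sketched like this; the `recognized` set here is an arbitrary illustration, not an official list:

```python
def effective_status(code, recognized=frozenset({200, 301, 302, 400, 403,
                                                 404, 408, 417, 500, 503, 504})):
    """Sketch of the RFC 2616 fallback rule (not haproxy code): a client
    that does not recognize a status code treats it as equivalent to the
    x00 code of its class."""
    return code if code in recognized else (code // 100) * 100
```

So a client that has never seen 431 (or nginx's 499) still handles it as a generic 400-class client error.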

 I get your point though. I'm used to say that 5xx is an error that the
 client should not be able to induce. This is not totally right nowadays
 due to 501, nor due to web services which like to return 500 when they
 want to return false... But in general that's the idea.

 However here it's not as black and white. If a client manages to close
 before the connection to the server is opened, it's generally because
 the server takes time to accept the connection. The longer it takes,
 the higher the number of 503 due to client aborts. What we should try
 to avoid in my opinion is to return 503 immediately. I think that the
 semantics of 408 are the closest to what we're expecting in terms of
 asking the client to retry if needed, eventhough that's a different
 technical issue. I'd rather not use plain 400 to avoid polluting the
 real 400 that admins have a hard time trying to fix sometimes.
I disagree with the statement that we should avoid an immediate response
when the connection is closed. Going back to RFC1945 (HTTP 1.0), we have
this:
"In any case, the closing of the connection by either or both parties
always terminates the current request, regardless of its status."
But that is HTTP 1.0, so its validity in this case is tenuous. I
couldn't find a similar statement in RFC2616, or anything which states
how it should be handled when the client closes its connection prior to
the response. I guess this is why it's a configurable option :-)

If we want to use a registered status code, I would argue in favor of
417, which has the following in its description:
"if the server is a proxy, the server has unambiguous evidence that the
request could not be met by the next-hop server"

Would it be difficult to add a parameter to the option? Such as option
httpclose 417, to control how haproxy responds?

 Any opinion on this?

 Willy





haproxy incorrectly reporting connection flags

2014-04-16 Thread Patrick Hemmer
With 1.5-dev22, we have a scenario where haproxy is saying the client
closed the connection, but really the server is the one that closed it.

Here is the log entry from haproxy:
haproxy[12540]: 10.230.0.195:33580 storage_upd storage_upd/storage_upd_2
0/0/0/522/555 0/0/0/0/0 0/0 412/271 200 CD-- 73E3-20FF5 + GET
/1/public_link/1BMcSfqg3OM4Ng HTTP/1.1

The log format is defined as:
capture request header X-Request-Id len 64
log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %hrl\ +\ %r

Attached is the haproxy config, along with a packet capture between
haproxy and the backend server.
The packet capture shows that the backend server listening on port 4001
sent a TCP FIN packet to haproxy first. Therefore haproxy shouldn't have
logged it with C---

-Patrick
global
log 127.0.0.1   local0
maxconn 20480
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.sock

defaults
log global
modehttp
option  httplog
option  dontlognull
retries 3
option  redispatch
timeout connect 5000
timeout client 6
timeout server 17
option  clitcpka
option  srvtcpka
option  abortonclose
option  splice-auto

stats   enable
stats   uri /haproxy/stats
stats   refresh 5
stats   auth user:pass

frontend storage_upd
bind 0.0.0.0:80
bind 0.0.0.0:81 accept-proxy
default_backend storage_upd
maxconn 2

capture request header X-Request-Id len 64

http-request add-header X-Request-Timestamp %Ts.%ms

log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %hrl\ +\ %r

backend storage_upd
fullconn 2
server storage_upd_1 127.0.0.1:4000 check  
server storage_upd_2 127.0.0.1:4001 check  
server storage_upd_3 127.0.0.1:4002 check  
server storage_upd_4 127.0.0.1:4003 check  


haproxy.pcap
Description: Binary data


Re: haproxy incorrectly reporting connection flags

2014-04-22 Thread Patrick Hemmer

*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-04-16 17:38:54 E
*To: *haproxy@formilux.org haproxy@formilux.org
*Subject: *haproxy incorrectly reporting connection flags

 With 1.5-dev22, we have a scenario where haproxy is saying the client
 closed the connection, but really the server is the one that closed it.

 Here is the log entry from haproxy:
 haproxy[12540]: 10.230.0.195:33580 storage_upd
 storage_upd/storage_upd_2 0/0/0/522/555 0/0/0/0/0 0/0 412/271 200 CD--
 73E3-20FF5 + GET /1/public_link/1BMcSfqg3OM4Ng HTTP/1.1

 The log format is defined as:
 capture request header X-Request-Id len 64
 log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %hrl\ +\ %r

 Attached is the haproxy config, along with a packet capture between
 haproxy and the backend server.
 The packet capture shows that the backend server listening on port
 4001 sent a TCP FIN packet to haproxy first. Therefore haproxy
 shouldn't have logged it with C---

 -Patrick

Any feedback on this?
I can happily provide any additional information if needed.

-Patrick


Re: haproxy incorrectly reporting connection flags

2014-04-23 Thread Patrick Hemmer
 

*From: *Cyril Bonté cyril.bo...@free.fr
*Sent: * 2014-04-23 02:37:07 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy incorrectly reporting connection flags

 Hi Patrick,

 Le 23/04/2014 03:25, Patrick Hemmer a écrit :
 Any feedback on this?
 I can happily provide any additional information if needed.

 Didn't you see Lukas' mail ? That's exactly what he asked for ;-)


Sorry about that. I see it on the mailing list archive, but not in my
client :-(

In response to Lukas' email:
Yes, I can reliably reproduce the issue. Here's another one with pcaps
of the eth0 and lo interfaces.

2014-04-23T13:52:35.438+00:00 local0/info(6) haproxy[12631]:
10.230.1.210:45631 storage_upd storage_upd/storage_upd_2 0/0/0/932/1004
0/0/0/0/0 0/0 299/270 200 CD-- 738E-8D38 + GET
/1/public_link/1BMcSfqg3OM4Ng HTTP/1.1

-Patrick


haproxy-eth0.pcap
Description: application/vnd.tcpdump.pcap


haproxy-lo.pcap
Description: application/vnd.tcpdump.pcap


Re: haproxy incorrectly reporting connection flags

2014-04-23 Thread Patrick Hemmer
*From: *Lukas Tribus luky...@hotmail.com
*Sent: * 2014-04-23 12:16:01 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *RE: haproxy incorrectly reporting connection flags


 Sorry about that. I see it on the mailing list archive, but not in my 
 client :-(
 Probably caught by a spam filter; I did respond directly to you and to the
 mailing list.




 Yes, I can reliably reproduce the issue. Here's another one with pcaps 
 of the eth0 and lo interfaces.
 Can you also provide ./haproxy -vv output please.



 Thanks,

 Lukas

HA-Proxy version 1.5-dev22-1a34d57 2014/02/03
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_ZLIB=1
USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3.4
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1 14 Mar 2012
Running on OpenSSL version : OpenSSL 1.0.1 14 Mar 2012
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.12 2011-01-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


-Patrick




Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 02:02:11 E
*To: *Rachel Chavez rachel.chave...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: please check

 On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
 The problem is:

 when client sends a request with incomplete body (it has content-length but
 no body) then haproxy returns a 5XX error when it should be a client issue.
 It's a bit more complicated than that. When the request body flows from the
 client to the server, at any moment the server is free to respond (either
 with an error, a redirect, a timeout or whatever). So as soon as we start
 to forward a request body from the client to the server, we're *really*
 waiting for the server to send a verdict about that request.
"At any moment the server is free to respond" yes, but the server cannot
respond *properly* until it gets the complete request.
If the response depends on the request payload, the server doesn't know
whether to respond with a 200 or with a 400.

RFC2616 covers this behavior in depth. See 8.2.3 "Use of the 100
(Continue) Status". This section indicates that the server should not be
expected to respond before receiving the request body unless the
client explicitly sends an "Expect: 100-continue" header.
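For illustration, the 100-continue handshake referred to here looks roughly like this on the wire (hostname and sizes are invented):

```http
POST /upload HTTP/1.1
Host: example.com
Content-Length: 1048576
Expect: 100-continue

HTTP/1.1 100 Continue

...client now sends the 1 MiB body...

HTTP/1.1 200 OK
```

The interim 100 response lets the client avoid sending a large body the server would reject anyway.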



 In the session.c file starting in 2404 i make sure that if I haven't
 received the entire body of the request I continue to wait for it by
 keeping AN_REQ_WAIT_HTTP as part of the request analyzers list as long as
 the client read timeout hasn't fired yet.
 It's unrelated unfortunately and it cannot work. AN_REQ_WAIT_HTTP is meant
 to wait for a *new* request. So if the client doesn't send a complete
 request, it's both wrong and dangerous to expect a new request inside the
 body. When the body is being forwarded, the request flows through
 http_request_forward_body(). This one already tests for the client timeout
 as you can see. I'm not seeing any error processing there though, maybe
 we'd need to set some error codes there to avoid them getting the default
 ones.

 In the proto_http.c file what I tried to do is avoid getting a server
 timeout when the client had timed-out already.
 I agree that it's always the *first* timeout which strikes which should
 indicate the faulty side, because eventhough they're generally set to the
 same value, people who want to enforce a specific processing can set them
 apart.

 Regards,
 Willy



Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 11:15:07 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 Hi Patrick,

 On Fri, May 02, 2014 at 10:57:38AM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 02:02:11 E
 *To: *Rachel Chavez rachel.chave...@gmail.com
 *CC: *haproxy@formilux.org
 *Subject: *Re: please check

 On Thu, May 01, 2014 at 03:44:46PM -0400, Rachel Chavez wrote:
 The problem is:

 when client sends a request with incomplete body (it has content-length but
 no body) then haproxy returns a 5XX error when it should be a client issue.
 It's a bit more complicated than that. When the request body flows from the
 client to the server, at any moment the server is free to respond (either
 with an error, a redirect, a timeout or whatever). So as soon as we start
 to forward a request body from the client to the server, we're *really*
 waiting for the server to send a verdict about that request.
 At any moment the server is free to respond yes, but the server cannot
 respond *properly* until it gets the complete request.
 Yes it can, redirects are the most common anticipated response, as the
 result of a POST to a page with an expired cookie. And the 302 is a
 clean response, it's not even an error.
I should have clarified what I meant by "properly". I didn't mean
that the server can't respond at all, as there are many cases where it
can, some of which you point out. I meant that if the server is
expecting a request body, it can't respond with a 200 until it verifies
that request body.

 If the response depends on the request payload, the server doesn't know
 whether to respond with 200 or with a 400.
 With WAFs deployed massively on server infrastructures, 403 are quite
 common long before the whole data. 413 request entity too large appears
 quite commonly as well. 401 and 407 can also happen when authentication
 is needed.

 RFC2616 covers this behavior in depth. See 8.2.3 Use of the 100
 (Continue) Status. This section indicates that it should not be
 expected for the server to respond without a request body unless the
 client explicitly sends a Expect: 100-continue
 Well, 2616 is 15 years old now and pretty obsolete, which is why the
 HTTP-bis WG is working on refreshing this. New wording is clearer about
 how a request body is used :

o  A server MAY omit sending a 100 (Continue) response if it has
   already received some or all of the message body for the
   corresponding request, or if the framing indicates that there is
   no message body.

 Note the "some or all".
I'm assuming you're quoting from:
http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-5.1.1

This only applies if the Expect: 100-continue was sent. Expect:
100-continue was meant to solve the issue where the client has a large
body, and wants to make sure that the server will accept the body before
sending it (and wasting bandwidth). Meaning that without sending
Expect: 100-continue, it is expected that the server will not send a
response until the body has been sent.


 It's very tricky to find which side is responsible for a stalled upload.
 I've very commonly found that frozen servers, or those with deep request
 queues will stall during body transfers because they still didn't start
 to consume the part of the request that's queued into network buffers.

 All I mean is that it's unfortunately not *that* white and black. We
 *really* need to make a careful difference between what happens on the
 two sides. The (hard) goal I'm generally seeking is to do my best so
 that a misbehaving user doesn't make us believe that a server is going
 badly. That's not easy, considering for example the fact that the 501
 message could be understood as a server error while it's triggered by
 the client.

 In general (unless there's something wrong with the way client timeouts
 are reported in http_request_forward_body), client timeouts should be
 reported as such, and same for server timeouts. It's possible that there
 are corner cases, but we need to be extremely careful about them and not
 try to generalize.
I agree, a client timeout should be reported as such, and that's what
this is all about. If the client sends half the body (or no body), and
then freezes, the client timeout should kick in and send back a 408, not
the server timeout resulting in a 504.

I think in this regard it is very clear:
* The server may respond with the HTTP response status code any time it
feels like it.
* Enable the server timeout and disable the client timeout upon any of
the following:
  * The client sent "Expect: 100-continue" and has completed all headers.
  * The complete client request has been sent, including the body if
Content-Length > 0.
  * Writing to the server socket would result in a blocking write
(indicating that the remote end is not processing).
* Otherwise, enable the client timeout.
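The decision rules proposed above can be sketched as a small function (illustrative only, not actual haproxy logic; the parameter names are invented):

```python
def active_timeout(headers_complete, body_complete,
                   expect_100_continue, server_write_blocked):
    """Sketch of the proposed timeout-arming rules: return which side's
    timeout should be armed for the current request state."""
    if expect_100_continue and headers_complete:
        return "server"   # client has handed off; now waiting on the server
    if headers_complete and body_complete:
        return "server"   # the full request has been forwarded
    if server_write_blocked:
        return "server"   # the server is not draining what we send it
    return "client"       # still waiting for bytes from the client
```

Under these rules, a client that sends headers but stalls mid-body keeps the client timeout armed, so the resulting error would be a 408 rather than a 504.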

Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 12:56:16 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 12:18:43PM -0400, Patrick Hemmer wrote:
 At any moment the server is free to respond yes, but the server cannot
 respond *properly* until it gets the complete request.
 Yes it can, redirects are the most common anticipated response, as the
 result of a POST to a page with an expired cookie. And the 302 is a
 clean response, it's not even an error.
 I should have clarified what I meant by properly more. I didn't mean
 that the server can't respond at all, as there are many cases it can,
 some of which you point out. I meant that if the server is expecting a
 request body, it can't respond with a 200 until it verifies that request
 body.
 OK, but from a reverse-proxy point of view, all of them are equally valid,
 and there's even no way to know if the server is interested in receiving
 these data at all. The only differences are that some of them are considered
 precious (ie those returning 200) and other ones less since they're
 possibly ephemeral.

 If the response depends on the request payload, the server doesn't know
 whether to respond with 200 or with a 400.
 With WAFs deployed massively on server infrastructures, 403 are quite
 common long before the whole data. 413 request entity too large appears
 quite commonly as well. 401 and 407 can also happen when authentication
 is needed.

 RFC2616 covers this behavior in depth. See 8.2.3 Use of the 100
 (Continue) Status. This section indicates that it should not be
 expected for the server to respond without a request body unless the
 client explicitly sends a Expect: 100-continue
 Well, 2616 is 15-years old now and pretty obsolete, which is why the
 HTTP-bis WG is working on refreshing this. New wording is clearer about
 how a request body is used :

o  A server MAY omit sending a 100 (Continue) response if it has
   already received some or all of the message body for the
   corresponding request, or if the framing indicates that there is
   no message body.

 Note the some or all.
 I'm assuming you're quoting from:
 http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-5.1.1
 Yes indeed. Ah in fact I found the exact part I was looking for, it's in
 the same block, two points below :

o  A server that responds with a final status code before reading the
   entire message body SHOULD indicate in that response whether it
   intends to close the connection or continue reading and discarding
   the request message (see Section 6.6 of [Part1]).

 This only applies if the Expect: 100-continue was sent. Expect:
 100-continue was meant to solve the issue where the client has a large
 body, and wants to make sure that the server will accept the body before
 sending it (and wasting bandwidth). Meaning that without sending
 Expect: 100-continue, it is expected that the server will not send a
 response until the body has been sent.
 No, it is expected that it will need to consume all the data before the
 connection may be reused for sending another request. That is the point
 of 100. And the problem is that if the server closes the connection when
 responding early (typically a 302) and doesn't drain the client's data,
 there's a high risk that the TCP stack will send an RST that can arrive
 before the actual response, making the client unaware of the response.
 That's why the server must consume the data even if it responds before
 the end.
 A 100-continue expectation informs recipients that the client is
   about to send a (presumably large) message body in this request and
   wishes to receive a 100 (Continue) interim response if the request-
   line and header fields are not sufficient to cause an immediate
   success, redirect, or error response.  This allows the client to wait
   for an indication that it is worthwhile to send the message body
   before actually doing so, which can improve efficiency when the
   message body is huge or when the client anticipates that an error is
   likely.


 (...)
 In general (unless there's something wrong with the way client timeouts
 are reported in http_request_forward_body), client timeouts should be
 reported as such, and same for server timeouts. It's possible that there
 are corner cases, but we need to be extremely careful about them and not
 try to generalize.
 I agree, a client timeout should be reported as such, and that's what
 this is all about. If the client sends half the body (or no body), and
 then freezes, the client timeout should kick in and send back a 408, not
 the server timeout resulting in a 504.
 Yes, I agree with this description.

 I think in this regards it is very clear.
 * The server may respond with the HTTP response status code any time it
 feels like it.
 OK

 * Enable the server timeout and disable the client

Re: please check

2014-05-02 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 14:00:24 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
 I've set up a test scenario, and the only time haproxy responds with 408
 is if the client times out in the middle of request headers. If the
 client has sent all headers, but no body, or partial body, it times out
 after the configured 'timeout server' value, and responds with 504.
 OK that's really useful. I'll try to reproduce that case. Could you please
 test again with a shorter client timeout than server timeout, just to ensure
 that it's not just a sequencing issue ?
I have. In my test setup, timeout client 1000 and timeout server 2000.

With incomplete headers I get:
haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
-1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ

With no body I get:
haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1

With incomplete body I get:
haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1




 Applying the patch solves this behavior. But my test scenario is very
 simple, and I'm not sure if it has any other consequences.
 It definitely has, which is why I'm trying to find the *exact* problem in
 order to fix it.

 Thanks,
 Willy



-Patrick


Re: please check

2014-05-02 Thread Patrick Hemmer
 



*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-02 15:06:13 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
*Subject: *Re: please check

 On Fri, May 02, 2014 at 02:14:41PM -0400, Patrick Hemmer wrote:
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2014-05-02 14:00:24 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *Rachel Chavez rachel.chave...@gmail.com, haproxy@formilux.org
 *Subject: *Re: please check

 On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
 I've set up a test scenario, and the only time haproxy responds with 408
 is if the client times out in the middle of request headers. If the
 client has sent all headers, but no body, or partial body, it times out
 after the configured 'timeout server' value, and responds with 504.
 OK that's really useful. I'll try to reproduce that case. Could you please
 test again with a shorter client timeout than server timeout, just to ensure
 that it's not just a sequencing issue ?
 I have. In my test setup, timeout client 1000 and timeout server 2000.

 With incomplete headers I get:
 haproxy[8893]: 127.0.0.1:41438 [02/May/2014:14:11:26.373] f1 f1/NOSRV
 -1/-1/-1/-1/1001 408 212 - - cR-- 0/0/0/0/0 0/0 BADREQ

 With no body I get:
 haproxy[8893]: 127.0.0.1:41439 [02/May/2014:14:11:29.576] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 1/1/1/1/0 0/0 GET / HTTP/1.1

 With incomplete body I get:
 haproxy[8893]: 127.0.0.1:41441 [02/May/2014:14:11:29.779] f1 b1/s1
 0/0/0/-1/2002 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1
 Great, thank you. I think that it tends to fuel the theory that the
 response error is not set where it should be in the forwarding path.

 I'll check this ASAP. BTW, it would be nice if you could check this
 as well with 1.4.25, I guess it does the same.

 Best regards,
 Willy

Confirmed. Exact same behavior with 1.4.25

-Patrick



Re: please check

2014-05-06 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-06 17:41:18 E
*To: *Patrick Hemmer hapr...@stormcloud9.net, Rachel Chavez
rachel.chave...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: please check

 Hi Patrick, hi Rachel,

 I might have fixed half of the issue, I'd like you to test the attached patch.
 It ensures that the client-side timeout is only disabled after transmitting
 the whole body and not during the transmission. It will report cD in the
 flags, but does not affect the status code yet. It does not abort when the
 client timeout strikes, but still when the server timeout strikes, which is
 another tricky thing to do properly. That's why I would be happy if you could
 at least confirm that you correctly get cD or sH (or even sD) depending on
 who times out first.

 Thanks,
 Willy

So good news, bad news, and strange news.

The good news: It is reporting cD-- as it should

The bad news: It's not reporting any return status at all. Before it
would log 504 and send a 504 response back. Now it logs -1 and doesn't
send anything back. It's just closing the connection.

The strange news: Contrary to your statement, the client connection is
closed after the 1 second timeout. It even logs this. The only thing
that doesn't happen properly is the absence of any response. Just
immediate connection close.


Before patch:
haproxy[26318]: 127.0.0.1:51995 [06/May/2014:18:55:33.002] f1 b1/s1
0/0/0/-1/2001 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1

After patch:
haproxy[27216]: 127.0.0.1:52027 [06/May/2014:18:56:34.165] f1 b1/s1
0/0/0/-1/1002 -1 0 - - cD-- 0/0/0/0/0 0/0 GET / HTTP/1.1

-Patrick


Re: please check

2014-05-07 Thread Patrick Hemmer

*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-07 09:45:47 E
*To: *Patrick Hemmer hapr...@stormcloud9.net, Rachel Chavez
rachel.chave...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: please check

 Hi Patrick, hi Rachel,

 so with these two patches applied on top of the previous one, I get the
 behaviour that we discussed here.

 Specifically, we differentiate client-read timeout, server-write timeouts
 and server read timeouts during the data forwarding phase. Also, we disable
 server read timeout until the client has sent its whole request. That way
 I'm seeing the following flags in the logs :

   - cH when client does not send everything before the server starts to
 respond, which is OK. Status=408 there.

   - cD when client stops sending data after the server starts to respond,
 or if the client stops reading data, which in both case is a clear
 client timeout. In both cases, the status is unaltered and nothing
 is emitted since the beginning of the response was already transmitted ;

   - sH when the server does not respond, including if it stops reading the
 message body (eg: process stuck). Then we have 504.

   - sD if the server stops reading or sending data during the data phase.

 The changes were a bit tricky, so any confirmation from any of you would
 make me more comfortable merging them into mainline. I'm attaching these
 two extra patches, please give them a try.

 Thanks,
 Willy

Works beautifully. I had created a little test suite to test to test a
bunch of conditions around this, and they all pass.
Will see about throwing this in our development environment in the next
few days if a release doesn't come out before then.

Thank you much :-)

-Patrick


Re: unique-id-header with capture request header

2014-05-13 Thread Patrick Hemmer
*From: *Bryan Talbot bryan.tal...@playnext.com
*Sent: * 2014-05-13 11:52:32 E
*To: *HAProxy haproxy@formilux.org
*Subject: *unique-id-header with capture request header

 We have more than 1 proxy tier. The edge proxy generates a unique ID
 and the other tiers (and apps in between) log the value and pass it
 around as a per-request id.

 Middle tier haproxy instances capture and log the unique id using
 capture request header which works fine; however, for the edge proxy
 this doesn't work since the ID doesn't seem to be available as a
 request header yet and a custom log-format must be used instead.

 This means that the logs generated by edge and non-edge proxies have a
 different format and must be parsed specially. Also, a custom log-format
 must be maintained just for this purpose.

 What's the best way to capture the unique-id generated in a proxy or
 from a request and log it in a consistent way?

 -Bryan


We do this exact same thing. Unique ID generated on the outside, and
passed around on the inside.
However we use a custom log format on both the internal and external
haproxy so that the format is exactly the same.

External:
unique-id-format %{+X}o%pid-%rt
capture request header X-Request-Id len 12
log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\
%ID,%[capture.req.hdr(0)]\ +\ %r

Internal:
capture request header X-Request-Id len 12
log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %[capture.req.hdr(0)]\ +\ %r

We capture on external as well, as occasionally an internal service will
re-route back through the external, and so it gets a new ID, and we can
correlate the two. But the fields are space delimited, so that even
showing 2 IDs, it's still a single field separated by a comma (assuming
some client doesn't pass an invalid value in, but that's also why this
is at the end of the log line).

The other advantage of doing it this way is that the default HTTP log
format is missing %U, which we find useful to have.

-Patrick


Disable TLS renegotiation

2014-05-16 Thread Patrick Hemmer
While going through the Qualys SSL test
(https://www.ssllabs.com/ssltest), one of the items it mentions is a DoS
vulnerability in regards to client-side initiated SSL renegotiation
(https://community.qualys.com/blogs/securitylabs/2011/10/31/tls-renegotiation-and-denial-of-service-attacks).
While researching the subject, it seems that the only reliable way to
mitigate the issue is in the server software. Apache has implemented
code to disable renegotiation. Would it be possible to add an option in
haproxy to disable it?

-Patrick


Re: Disable TLS renegotiation

2014-05-16 Thread Patrick Hemmer

*From: *Lukas Tribus luky...@hotmail.com
*Sent: * 2014-05-16 13:23:43 E
*To: *Patrick Hemmer hapr...@stormcloud9.net, haproxy@formilux.org
haproxy@formilux.org
*Subject: *RE: Disable TLS renegotiation

 Hi Patrick,


 While going through the Qualys SSL test  
 (https://www.ssllabs.com/ssltest), one of the items it mentions is a  
 DoS vulnerability in regards to client-side initiated SSL renegotiation  
 (https://community.qualys.com/blogs/securitylabs/2011/10/31/tls-renegotiation-and-denial-of-service-attacks).
   
 While researching the subject, it seems that the only reliable way to  
 mitigate the issue is in the server software. Apache has implemented  
 code to disable renegotiation. Would it be possible to add an option in  
 haproxy to disable it?
 Looks like its already disabled by default?

 https://www.ssllabs.com/ssltest/analyze.html?d=demo.1wt.eu

 --- Secure Client-Initiated Renegotiation:
   No
 --- Insecure Client-Initiated Renegotiation:
   No



 Regards,

 Lukas

 
Doh!

You're right, I screwed up the test when I ran it. Yes, it is disabled.
Sorry for the noise.

-Patrick


Re: Error 408 with Chrome

2014-05-26 Thread Patrick Hemmer

*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-26 12:07:09 EDT
*To: *Arnall arnall2...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: Error 408 with Chrome

 On Mon, May 26, 2014 at 05:52:15PM +0200, Arnall wrote:
 On 26/05/2014 16:13, Willy Tarreau wrote:
 Hi Arnall,

 On Mon, May 26, 2014 at 11:56:52AM +0200, Arnall wrote:
 Hi Willy,

 same problem here with Chrome version 35.0.1916.114 m and :
 HA-Proxy version 1.4.22 2012/08/09 (Debian 6) Kernel 3.8.13-OVH
 HA-Proxy version 1.5-dev24-8860dcd 2014/04/26 (Debian GNU/Linux 7.5)
 Kernel 3.10.13-OVH

 <html><body><h1>408 Request Time-out</h1>
 Your browser didn't send a complete request in time.
 </body></html>

 Timing : Blocking 2ms /  Receiving : 1ms
 Where are you measuring this ? I suspect on the browser, right ? In
 this case it confirms the malfunction of the preconnect. You should
 take a network capture which will be usable as a reliable basis for
 debugging. I'm pretty sure that what you'll see in fact is the following
 sequence :

browser                 haproxy
  ------ connect -------->
     ... long pause ...
  <----- 408 + FIN -------
     ... long pause ...
  ---- send request ----->
  <-------- RST ----------

 And you see the error in the browser immediately. The issue is then
 caused by the browser not respecting this specific rule :

  
 Yes it was measured on the browser (Chrome network monitor)
 I've made a network capture for you (in attachment).
 Thank you. If you looked at the connection from port 62691, it's exactly
 the sequence I described above. So that clearly explains why Chrome is
 the only one affected!

 Best regards,
 Willy


Has anyone opened a bug against Chrome for this behavior (did a brief
search and didn't see one)? I'd be interested in following it as this
behavior will likely have an impact on an upcoming project I've got.

-Patrick


Re: HAProxy 1.5 release?

2014-06-18 Thread Patrick Hemmer
Haproxy 1.6 is very close to release.
See http://marc.info/?l=haproxy&m=140129354705695 and
http://marc.info/?l=haproxy&m=140085816115800

-Patrick


*From: *Stephen Balukoff sbaluk...@bluebox.net
*Sent: * 2014-06-18 08:40:55 EDT
*To: *haproxy@formilux.org
*Subject: *HAProxy 1.5 release?

 Hey Willy!

 I'm involved in a group that is building a highly-scalable open source
 virtual appliance-based load balancer for use with cloud operating
 systems like OpenStack. We are planning on making haproxy the core
 component of the solution we're building.

 At my company we've actually been using haproxy 1.5 for a couple years
 now in production to great effect, and absolutely love it. But I'm
 having trouble getting the rest of the members of my team to go along
 with the idea of using 1.5 in our solution simply because of its
 official status as a development branch. There are just so many
 useful new features in 1.5 that I'd really rather not have to go back
 to 1.4 in our solution...

 So! My question is: What can we do to help y'all bring the 1.5 branch
 far enough along such that y'all are comfortable releasing it as the
 official stable branch of haproxy? (Note we do have people in our
 group with connections in some of the major linux distros who can help
 to fast-track its adoption into official releases of said distros.)

 Thanks,
 Stephen

 -- 
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807



Re: HAProxy 1.5 release?

2014-06-18 Thread Patrick Hemmer
Err, pardon the typo, 1.5 :-)

-Patrick


*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-06-18 08:49:27 EDT
*To: *Stephen Balukoff sbaluk...@bluebox.net, haproxy@formilux.org
*Subject: *Re: HAProxy 1.5 release?

 Haproxy 1.6 is very close to release.
  See http://marc.info/?l=haproxy&m=140129354705695 and
  http://marc.info/?l=haproxy&m=140085816115800

 -Patrick

 
 *From: *Stephen Balukoff sbaluk...@bluebox.net
 *Sent: * 2014-06-18 08:40:55 EDT
 *To: *haproxy@formilux.org
 *Subject: *HAProxy 1.5 release?

 Hey Willy!

 I'm involved in a group that is building a highly-scalable open
 source virtual appliance-based load balancer for use with cloud
 operating systems like OpenStack. We are planning on making haproxy
 the core component of the solution we're building.

 At my company we've actually been using haproxy 1.5 for a couple
 years now in production to great effect, and absolutely love it. But
 I'm having trouble getting the rest of the members of my team to go
 along with the idea of using 1.5 in our solution simply because of
 its official status as a development branch. There are just so many
 useful new features in 1.5 that I'd really rather not have to go back
 to 1.4 in our solution...

 So! My question is: What can we do to help y'all bring the 1.5 branch
 far enough along such that y'all are comfortable releasing it as the
 official stable branch of haproxy? (Note we do have people in our
 group with connections in some of the major linux distros who can
 help to fast-track its adoption into official releases of said
 distros.)

 Thanks,
 Stephen

 -- 
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807




Re: 3rd regression : enough is enough!

2014-06-23 Thread Patrick Hemmer

*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-06-23 10:23:44 EDT
*To: *haproxy@formilux.org
*CC: *Patrick Hemmer hapr...@stormcloud9.net, Rachel Chavez
rachel.chave...@gmail.com
*Subject: *3rd regression : enough is enough!

 Hi guys,

 today we got our 3rd regression caused by the client-side timeout changes
 introduced in 1.5-dev25. And this one is a major one, causing FD leaks
 and CPU spins when servers do not advertise a content-length and the
 client does not respond to the FIN.  And the worst of it, is I have no
 idea how to fix this at all.

 I had that bitter feeling when doing these changes a month ago that
 they were so much tricky that something was obviously going to break.
 It has broken twice already and we could fix the issues. The second
 time was quite harder, and we now see the effect of the regressions
 and their workarounds spreading like an oil stain on paper, with
 workarounds becoming more and more complex and less under control.

 So in the end I have reverted all the patches responsible for these
 regressions. The purpose of these patches was to report cD instead
 of sD in the logs in the case where a client disappears during a
 POST and haproxy has a shorter timeout than the server's.

 I'll issue 1.5.1 shortly with the fix before everyone gets hit by busy
 loops and lacks of file descriptors. If we find another way to do it
 later, we'll try it in 1.6 and may consider backpoting to 1.5 if the
 new solution is absolutely safe. But we're very far away from that
 situation now.

 I'm sorry for this mess just before the release, next time I'll be
 stricter about such dangerous changes that I don't feel at ease with.

 Willy



This is unfortunate. I'm guessing a lot of the issue was in ensuring the
client timeout was observed. Would it at least be possible to change the
response, so that even if the server timeout is what kills the request,
that the client gets sent back a 408 instead of a 503?

-Patrick


Re: 3rd regression : enough is enough!

2014-06-24 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-06-24 01:33:41 EDT
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org, Rachel Chavez rachel.chave...@gmail.com
*Subject: *Re: 3rd regression : enough is enough!

 Hi Patrick,

 On Mon, Jun 23, 2014 at 09:30:11PM -0400, Patrick Hemmer wrote:
 This is unfortunate. I'm guessing a lot of the issue was in ensuring the
 client timeout was observed. Would it at least be possible to change the
 response, so that even if the server timeout is what kills the request,
 that the client gets sent back a 408 instead of a 503?
 For now I have no idea. All the mess came from the awful changes that
 were needed to ignore the server-side timeout and pretend it came from
 the client despite the server triggering first. This required to mess
 up with these events in a very dangerous way :-(

 So right now I'd suggest to try with a shorter client timeout than the
 server timeout. 
That's what we're doing. The 'timeout client' is set to 6, 'timeout
server' is set to 17

 I can try to see how to better *report* this specific
 event if needed, but I don't want to put the brown paper bag on
 timeouts anymore.

 Regards,
 Willy




Re: 3rd regression : enough is enough!

2014-06-24 Thread Patrick Hemmer

*From: *Lukas Tribus luky...@hotmail.com
*Sent: * 2014-06-24 06:44:44 EDT
*To: *Willy Tarreau w...@1wt.eu, Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org, Rachel Chavez
rachel.chave...@gmail.com
*Subject: *RE: 3rd regression : enough is enough!


 
 Date: Tue, 24 Jun 2014 07:33:41 +0200
 From: w...@1wt.eu
 To: hapr...@stormcloud9.net
 CC: haproxy@formilux.org; rachel.chave...@gmail.com
 Subject: Re: 3rd regression : enough is enough!

 Hi Patrick,

 On Mon, Jun 23, 2014 at 09:30:11PM -0400, Patrick Hemmer wrote:
 This is unfortunate. I'm guessing a lot of the issue was in ensuring the
 client timeout was observed. Would it at least be possible to change the
 response, so that even if the server timeout is what kills the request,
 that the client gets sent back a 408 instead of a 503?
 For now I have no idea. All the mess came from the awful changes that
 were needed to ignore the server-side timeout and pretend it came from
 the client despite the server triggering first. This required to mess
 up with these events in a very dangerous way :-(

 So right now I'd suggest to try with a shorter client timeout than the
 server timeout. I can try to see how to better *report* this specific
 event if needed, but I don't want to put the brown paper bag on
 timeouts anymore.
 I fully agree.


 But perhaps we can document the current behavior in those particular
 conditions better, so that this is better known until we find a good
 code-based solution.


 What is the issue here exactly? When the client uploads large POST
 requests and the server timeout is larger than the client timeout,
 we will see an sD flag in the log? Is that all, or are there other
 conditions in which a client timeout triggers an sD log?
The issue is that the 'timeout client' parameter isn't being observed
once the client goes into the data phase. If the server is waiting for
data (http body), it won't send anything back until the client sends in
a body. Since 'timeout client' isn't being observed, 'timeout
server' kicks in and haproxy responds with a 503 because the server took
too long to respond, when it was really the client's fault for not
sending a body. This is supposed to be a 408.
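The arrangement under discussion can be sketched as a minimal config; the timeout values here are illustrative, not the ones from this thread:

```
# sketch only: timeout values are made up for illustration
defaults
    mode http
    timeout connect 5s
    # client timeout deliberately shorter than server timeout, so a
    # client that stalls mid-upload trips the client-side timer first
    timeout client  30s
    timeout server  60s
```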

 Can it be workarounded completely by configuring the server timeout
 larger then the client timeout?





 Regards,

 Lukas

 



Re: Getting size of response

2014-08-26 Thread Patrick Hemmer
*From:* Nick Jennings n...@silverbucket.net
*Sent:* 2014-08-26 19:55:34 EDT
*To:* haproxy haproxy@formilux.org
*Subject:* Getting size of response

 Hi all, is there a way to get the size of a response as it's being
 sent out through haproxy during logging? The node.js app (restify) is
 sending gzip'd responses but not including a Content-Length due to
 some bug. I was wondering if I could get the size with haproxy and
 side-step the whole issue.

 Thanks,
 Nick


If you're using `option httplog`, the 8th field is bytes sent to client.
If you're using `option tcplog`, it's the 7th field.
See http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.2.3
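As a sketch (frontend/backend names and addresses are made up), the stock httplog output carries the byte count without needing any custom log-format:

```
global
    log 127.0.0.1:514 local0

frontend fe_web
    mode http
    option httplog
    log global
    bind *:80
    default_backend be_app

# With 'option httplog', the bytes-sent-to-client counter appears as
# the 8th space-delimited field of each log line (after the status code).
```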

-Patrick




Re: Spam to this list?

2014-09-05 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-09-05 11:19:22 EDT
*To: *Ghislain gad...@aqueos.com
*CC: *Mark Janssen maniac...@gmail.com, david rene comba lareu
shadow.of.sou...@gmail.com, Colin Ingarfield co...@ingarfield.com,
haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: Spam to this list?

 On Fri, Sep 05, 2014 at 04:32:55PM +0200, Ghislain wrote:
 hi,

   this is not spam but some bad behavior of a person that is 
 subscribing the mail of this mailing list to newsletters just to annoy 
 people.
  This guy must be laughing like mad about what a loser he is, but 
 no spam filter will prevent this; there is no filter against human 
 stupidity that is legal in our country.
 That's precisely the point unfortunately :-/

 And the other annoying part are those recurring claims from people who
 know better than anyone else and who pretend that they can magically run
 a mail server with no spam. That's simply nonsense and utterly false. Or
 they have such aggressive filters that they can't even receive the complaints
 from their users when mails are eaten. Everyone can do that, it's enough
 to alias haproxy@formilux.org to /dev/null to get the same effect!

 But the goal of the ML is not to block the maximum amount of spam but to
 ensure optimal delivery to its subscribers. As soon as you add some level
 of filtering, you automatically get some increasing amount of false positive.

 We even had to drop one filter a few months ago because some gmail users
 could not post anymore.

 I'm open to suggestions, provided that :

1) it doesn't add *any* burden on our side (scalability means that the
   processing should be distributed, not centralized)

2) it doesn't block any single valid e-mail, even from non-subscriber
Has it ever been tried enabling a spam filter in a dry-run mode? Run it
for a year, and just have it add a header indicating whether it would
have blocked the message. Then see if any legitimate messages would have
been blocked.

I also want to point out that the mailing list itself sometimes lands on
various blacklists because of the amount of spam coming from it. So now
users using mail providers subscribing to these blacklists are not just
losing a few messages, they're losing every message.


3) it doesn't require anyone to resubscribe nor change their ingress
   filters to get the mails into the same box.

4) it doesn't add extra delays to posts (eg: no grey-listing) because
   that's really painful for people who post patches and are impatient
   to see them reach the ML.
In the past you stated that you have grey-listing enabled (
http://marc.info/?l=haproxy&m=139748200027362&w=2 ), and here you're
stating that you don't want it. Now I'm confused as to which is really the case.
If indeed grey-listing is not enabled, why not enable it for
non-subscribers? I'd bet that all the people sending patches are subscribed.

 I'm always amazed how people are annoyed with spam in 2014. Spam is part
 of the internet experience and is so ubiquitous that I think these people
 have been living under a rock. Probably those people consider that we
 should also run blood tests on people who want to jump into a bus to
 ensure that they don't come in with any minor disease in hope that all
 diseases will finally disappear. I'm instead in the camp of those who
 consider that training the population is the best resistance, and I think
 that all living being history already proved me right.
I would argue the opposite, this is 2014, we should have capable spam
handling technologies. And indeed we do!
The thing is that spam handling has to be handled on the original
recipient of the email (haproxy@formilux.org). Once the message has been
sent through a relay (the mailing list), many spam filtering
capabilities no longer work (DNSBL, greylisting, SPF, etc). Thus it is
the responsibility of the relay to do the filtering.


 I probably received 5 more spams while writing this, and who cares!
Obviously quite a few people care.
This is your list, and I respect that, but your opinion seems to be the
minority.

You've stated in the past that you don't like it when actions result in
people unsubscribing from the list. How many people unsubscribe because
they are tired of the spam?

I know I barely pay as much attention to the mailing list as I used to
because of the amount of spam. Oh look, a message. SPAM. Oh look, a
message. SPAM again...

-Patrick


Re: Spam to this list?

2014-09-05 Thread Patrick Hemmer
*From: *Cyril Bonté cyril.bo...@free.fr
*Sent: * 2014-09-05 15:50:21 EDT
*To: *Patrick Hemmer hapr...@stormcloud9.net, Willy Tarreau
w...@1wt.eu, Ghislain gad...@aqueos.com
*CC: *Mark Janssen maniac...@gmail.com, david rene comba lareu
shadow.of.sou...@gmail.com, Colin Ingarfield co...@ingarfield.com,
haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: Spam to this list?

 Hi,

 On 05/09/2014 20:39, Patrick Hemmer wrote:
 Obviously quite a few people care.
 This is your list, and I respect that, but your opinion seems to be the
 minority.

 Without facts, this is as true as if I argue that the majority doesn't
 care that much but are more annoyed by the amount of mails containing
 the subject Spam to this list?.



Ah, but your facts are in the discussion thread. I've seen very few
people supporting the current state of things. Yes it is possible there
are other people who haven't replied, but I think we can make a couple
deductions:
* Those who have strong feelings on the matter have already reported in
* Those who haven't either:
   * don't have a strong opinion
   * feel their stance is sufficiently represented.
   * haven't checked their mail
In all cases, I think it would be reasonable to assume that the sample
set already provided would reflect the general trend of further
responses. Meaning that the majority opinion would remain the majority
opinion.

-Patrick



using HAProxy in front of SSO

2014-12-09 Thread Patrick Kaeding
Hello

I'm interested in using HAProxy as my external-facing proxy, in front
of my applications. I want to implement an SSO application to handle
authentication (similar to what is described here:
http://dejanglozic.com/2014/10/07/sharing-micro-service-authentication-using-nginx-passport-and-redis/).

Nginx has the ngx_http_auth_request_module
(http://nginx.org/en/docs/http/ngx_http_auth_request_module.html),
which looks like it would work well, but I am wondering if I can do
this with HAProxy, and not need Nginx as a second layer in front of my
applicaitons.

Can HAProxy make subrequests to determine how to handle the incoming
request? Are there any resources I should check out to help with this?

Thanks!
-- 
Patrick Kaeding
pkaed...@launchdarkly.com



Re: Question on rewriting query string

2015-05-07 Thread Patrick Slattery

I ended up trying out the new Lua functionality in 1.6 and was able to get this 
to work with it.

haproxy.cfg
global
lua-load /path/query.lua

frontend FE-HTTP
bind 127.0.0.1:80
mode http
http-request lua query

query.lua
function query(txn)
    local mydate = txn.sc:http_date(txn.f:date())
    local sid = txn.f:url_param("sid")
    local sid_guid = txn.f:url_param("sid_guid")
    local strid = txn.f:url_param("strid")

    local response = ""

    response = response .. "HTTP/1.1 301 Moved Permanently\n"
    response = response .. "Content-Type: text/html; charset=iso-8859-1\n"
    response = response .. "Date: " .. mydate .. "\n"
    response = response .. "Location: " .. "http://shop.companyx.com/shop.aspx?"
        .. "sid=" .. sid .. "&sid_guid=" .. sid_guid
        .. "&strid=" .. strid .. "&shopurl=search.aspx" .. "\n"
    response = response .. "Content-Length: 0\n"
    response = response .. "Connection: close\n"
    response = response .. "\n"

    txn.res:send(response)
    txn:close()
end

I'm just curious if this is the right way to do this in HAProxy?


On May 07, 2015, at 10:56 AM, Patrick Slattery patrickmslatt...@mac.com wrote:


Hi, I'm trying to figure out how rewrite an incoming query string such as:

http://www.example.com/?domain=companyx.com&sdn=&sid=123456789&sid_guid=d8bfbc1a-c790-4cf8-beec-ffbbf72d9476&k=mystring&strid=1e961&1e961t=%20?

becomes:

http://shop.companyx.com/shop.aspx?sid=123456789&sid_guid=d8bfbc1a-c790-4cf8-beec-ffbbf72d9476&strid=1e961&shopurl=search.aspx

I can see how to extract each of the query params with urlp(parmname) but I 
don't see any obvious way of reassembling the query string from the extracted 
variables.
Is this practical in HAProxy (any version) or should I look at using some other 
tool for this?

Thanks.




Question on rewriting query string

2015-05-07 Thread Patrick Slattery

Hi, I'm trying to figure out how rewrite an incoming query string such as:

http://www.example.com/?domain=companyx.com&sdn=&sid=123456789&sid_guid=d8bfbc1a-c790-4cf8-beec-ffbbf72d9476&k=mystring&strid=1e961&1e961t=%20?

becomes:

http://shop.companyx.com/shop.aspx?sid=123456789&sid_guid=d8bfbc1a-c790-4cf8-beec-ffbbf72d9476&strid=1e961&shopurl=search.aspx

I can see how to extract each of the query params with urlp(parmname) but I 
don't see any obvious way of reassembling the query string from the extracted 
variables.
Is this practical in HAProxy (any version) or should I look at using some other 
tool for this?

Thanks.




Re: Question on rewriting query string

2015-05-08 Thread Patrick Slattery

Wow, very nice, regular expressions sure are powerful :-)

Here is what I ended up with:
defaults
 mode http
 timeout connect 1s
 timeout client 1s
 timeout server 1s
listen HTTP-in
 bind 127.0.0.1:80
 reqrep .*&(sid=[a-z0-9A-Z]*)&(sid_guid=[^&]*).*&(strid=[0-9a-zA-Z]*) 
\1&\2&\3&shopurl=search.aspx
 redirect code 301 prefix http://shop.companyx.com/shop.aspx?


curl -i 
"http://localhost/?sid=100026264&sid_guid=342ca0f2-beb0-4132-b188-5f162941bb83"

HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: 
http://shop.companyx.com/shop.aspx/?sid=100026264&sid_guid=342ca0f2-beb0-4132-b188-5f162941bb83
Connection: close


On May 07, 2015, at 05:27 PM, Aleksandar Lazic al-hapr...@none.at wrote:


I would use reqrep
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-reqrep

Example:
reqrep .*&(sid=[a-z0-9A-Z]*)&(sid_guid=[^&]*).*&(strid=[0-9a-zA-Z]*)
http://shop.companyx.com/shop.aspx?\1&\2&\3&shopurl=search.aspx

I have build the regex with https://regex101.com/

Maybe there is a option to use a more generic way as in lua with.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#url_param

BR Aleks




mailing list archives dead

2016-04-04 Thread Patrick Hemmer
It looks like the mailing list archives stopped working mid-December.

https://marc.info/?l=haproxy

-Patrick


Re: unique-id-header and req.hdr

2017-01-27 Thread Patrick Hemmer


On 2017/1/27 14:38, Cyril Bonté wrote:
> On 27/01/2017 at 20:11, Ciprian Dorin Craciun wrote:
>> On Fri, Jan 27, 2017 at 9:01 PM, Cyril Bonté <cyril.bo...@free.fr>
>> wrote:
>>> Instead of using "unique-id-header" and temporary headers, you can
>>> use the
>>> "unique-id" fetch sample [1] :
>>>
>>> frontend public
>>> bind *:80
>>> unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>>> default_backend ui
>>>
>>> backend ui
>>> http-request set-header X-Request-Id %[unique-id] unless {
>>> req.hdr(X-Request-Id) -m found }
>>
>>
>> Indeed this might be one version of ensuring that a `X-Request-Id`
>> exists, however it doesn't serve a second purpose
>
> And that's why I didn't reply to your anwser but to the original
> question ;-)
>

Something that might satisfy both requests, why not just append to the
existing request-id?

unique-id-format %[req.hdr(X-Request-ID)],%{+X}o\
%ci:%cp_%fi:%fp_%Ts_%rt:%pid

This does result in a leading comma if X-Request-ID is unset. If that's
unpleasant, you could do something like write a tiny Lua sample converter
to append a comma if the value is not empty.
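Such a converter could look like this (a sketch, assuming haproxy is built with Lua support and the file is loaded via `lua-load`; `suffix_comma` is a made-up name):

```lua
-- Hypothetical converter: append a trailing comma to non-empty values,
-- so an absent X-Request-ID doesn't leave a dangling leading comma.
core.register_converters("suffix_comma", function(value)
    if value == nil or value == "" then
        return ""
    end
    return value .. ","
end)
```

The format string could then become something like
`unique-id-format %[req.hdr(X-Request-ID),lua.suffix_comma]%{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid`.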

-Patrick


Re: unique-id-header and req.hdr

2017-01-27 Thread Patrick Hemmer


On 2017/1/27 15:31, Ciprian Dorin Craciun wrote:
> On Fri, Jan 27, 2017 at 10:24 PM, Patrick Hemmer
> <hapr...@stormcloud9.net> wrote:
>> Something that might satisfy both requests, why not just append to the
>> existing request-id?
>>
>> unique-id-format %[req.hdr(X-Request-ID)],%{+X}o\
>> %ci:%cp_%fi:%fp_%Ts_%rt:%pid
>>
>> This does result in a leading comma if X-Request-ID is unset. If that's
>> unpleasant, you could do something like write a tiny Lua sample converter to
>> append a comma if the value is not empty.
>
> However, just setting the `unique-id-format` is not enough, as we
> should also send that ID to the backend, thus there is a need of
> `http-request set-header X-Request-Id %[unique-id] if !...`.  (By not
> using the `http-request`, we do get the ID from the header in the log,
> but not to the backend.)

That's what the `unique-id-header` config parameter is for.

>
> But now -- I can't say with certainty, but I remember trying various
> variants -- I think the evaluation order of `unique-id-format` is
> after all the `http-request` rules, thus the header will always be
> empty (if not explicitly set in the request), although in the log we
> would have a correct ID.
>
>
> (This is why I settled with a less optimal solution of having two
> headers, but with identical values, and working correctly in all
> instances.)
>
> Ciprian.
>



haproxy consuming 100% cpu - epoll loop

2017-01-16 Thread Patrick Hemmer
So on one of my local development machines haproxy started pegging the
CPU at 100%
`strace -T` on the process just shows:

...
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
epoll_wait(0, {}, 200, 0)   = 0 <0.03>
...

Opening it up with gdb, the backtrace shows:

(gdb) bt
#0  0x7f4d18ba82a3 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1  0x7f4d1a570ebc in _do_poll (p=, exp=-1440976915)
at src/ev_epoll.c:125
#2  0x7f4d1a4d3098 in run_poll_loop () at src/haproxy.c:1737
#3  0x7f4d1a4cf2c0 in main (argc=, argv=) at src/haproxy.c:2097

This is haproxy 1.7.0 on CentOS/7


Unfortunately I'm not sure what triggered it.

-Patrick


missing documentation on 51degrees samples

2016-10-07 Thread Patrick Hemmer
The documentation doesn't mention the sample fetcher `51d.all`, nor the
converter `51d.single`. The only place they're mentioned is the repo README.

Also the documentation for `51degrees-property-name-list` indicates it
takes an optional single string argument (`[]`), rather than
multiple string arguments (`...`). This led me to expect it was
comma delimited, which ended up not working.
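For the record, the form that ended up working is separate space-delimited arguments; a hedged sketch (the data-file path and property names below are examples only):

```
global
    # path and property names are illustrative
    51degrees-data-file /etc/haproxy/51Degrees.dat
    # separate arguments, NOT one comma-delimited string
    51degrees-property-name-list IsMobile DeviceType

frontend fe_web
    bind *:80
    # the fetch described only in the repo README
    http-request set-header X-51D %[51d.all(IsMobile,DeviceType)]
```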

-Patrick


format string fetch method?

2016-10-06 Thread Patrick Hemmer
While working with the `http-request set-var` (and a few other places,
but primarily here), it would be very useful to be able to use haproxy
format strings to define the variable.

For example
  http-request set-var(txn.foo) fmt(%ci:%cp:%Ts)
Or even
  http-request set-var(txn.foo) fmt(%ci:%cp:%Ts),crc32()

I don't currently see a way to do this, but I could be missing something.
If it's not possible, any chance of getting it added?
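One workaround I can think of, sketched under the assumption that only stock directives are available (`X-Fmt-Tmp` is a made-up scratch header name): stage the format string in a request header, then fetch it back into the variable:

```
# sketch: set-header accepts a log-format string, so it can act as a
# staging area for values that set-var's expression syntax can't build
http-request set-header X-Fmt-Tmp %ci:%cp:%Ts
http-request set-var(txn.foo) req.hdr(X-Fmt-Tmp),crc32
http-request del-header X-Fmt-Tmp
```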

-Patrick


configure peer namespace

2016-10-09 Thread Patrick Hemmer
Can we get the ability to configure the peer namespace?
Right now haproxy uses the default namespace, but in our system we have
an "internal" interface which is able to talk to the other haproxy
nodes, and this interface is in another network namespace.

Additionally, the error output for failure to bind to a peer address is
lacking. I had to do an `strace` to figure out what the error was:
[ALERT] 282/214021 (2725) : [haproxy.main()] .
[ALERT] 282/214021 (2725) : [haproxy.main()] Some protocols failed to
start their listeners! Exiting.

That's on haproxy 1.6.9

Anyway, I can change the namespace that haproxy is launched in, and then
manually override the namespace for every `bind` and `server` parameter,
but it's rather cumbersome to do so. Would be much nicer to be able to
control the peer binding namespace, just like any other bind.
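To illustrate the asymmetry (addresses and the namespace name below are made up): bind and server lines accept a namespace keyword, while the peers section does not:

```
frontend fe_in
    bind 10.0.0.10:80 namespace internal           # supported
backend be_app
    server app1 10.1.0.5:8080 namespace internal   # supported

peers mycluster
    peer node1 10.2.0.1:1024    # no 'namespace' option accepted here
    peer node2 10.2.0.2:1024
```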

If this would be a simple change, I might be willing to attempt it. But
I've never worked in the haproxy source before, so not sure how involved
it would be.

Thanks

-Patrick

