Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
This patch does appear to have solved the issue reported, but it
introduced another.
If I use `http-request add-header` with %rt in the value to add the
request ID, and then I also use it in `unique-id-format`, the two
settings get different values: the value used for `http-request
add-header` will be one less than the value used for `unique-id-format`
(this applies both to using %ID in the log format and to using
`unique-id-header`).

Without this patch, all values are the same.
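
For reference, a minimal config sketch that should reproduce the
mismatch (the frontend/backend names and addresses are illustrative,
not from my real config):

frontend test
    bind *:8080
    # the %rt rendered into this header comes out one less than the
    # %rt used by unique-id-format below
    http-request add-header X-Request-Id %{+X}o%pid-%rt
    unique-id-format %{+X}o%pid-%rt
    unique-id-header X-Unique-Id
    default_backend app

backend app
    server app1 127.0.0.1:8000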

-Patrick


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2013-08-13 11:53:16 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?
 Wow, congrats, you found a nice ugly bug! Here's how the counter is
 retrieved at the moment of logging :

   iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);

 As you can see, it uses a global variable which holds the global number of
 requests seen at the moment of logging (or assigning the header) instead of
 a unique value assigned to each request!

 So all the requests that are logged in the same time frame between two
 new requests get the same ID :-(

 The counter should be auto-incrementing so that each retrieval is unique.

 Please try with the attached patch.

 Thanks,
 Willy




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
Actually I sent that prematurely; the behavior is even simpler. With
`http-request add-header`, %rt is one less than when used in a
`log-format` or `unique-id-header`. I'm guessing the value is
incremented after `http-request` is processed, but before `log-format`
or `unique-id-header` are evaluated.
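
To illustrate with made-up hex values for a single request:

http-request add-header X-Request-Id %{+X}o%pid-%rt   # header gets 28645-001A
unique-id-format %{+X}o%pid-%rt                       # log/unique-id gets 28645-001B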

-Patrick



*From: *Patrick Hemmer hapr...@stormcloud9.net
*Sent: * 2014-01-25 03:40:38 E
*To: *Willy Tarreau w...@1wt.eu
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 This patch does appear to have solved the issue reported, but it
 introduced another.
 If I use `http-request add-header` with %rt in the value to add the
 request ID, and then I also use it in `unique-id-format`, the two
 settings get different values: the value used for `http-request
 add-header` will be one less than the value used for `unique-id-format`
 (this applies both to using %ID in the log format and to using
 `unique-id-header`).

 Without this patch, all values are the same.

 -Patrick

 
 *From: *Willy Tarreau w...@1wt.eu
 *Sent: * 2013-08-13 11:53:16 E
 *To: *Patrick Hemmer hapr...@stormcloud9.net
 *CC: *haproxy@formilux.org haproxy@formilux.org
 *Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?
 Wow, congrats, you found a nice ugly bug! Here's how the counter is
 retrieved at the moment of logging :

   iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);

 As you can see, it uses a global variable which holds the global number of
 requests seen at the moment of logging (or assigning the header) instead of
 a unique value assigned to each request!

 So all the requests that are logged in the same time frame between two
 new requests get the same ID :-(

 The counter should be auto-incrementing so that each retrieval is unique.

 Please try with the attached patch.

 Thanks,
 Willy





Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Willy Tarreau
Hi Patrick,

On Sat, Jan 25, 2014 at 03:40:38AM -0500, Patrick Hemmer wrote:
 This patch does appear to have solved the issue reported, but it
 introduced another.
 If I use `http-request add-header` with %rt in the value to add the
 request ID, and then I also use it in `unique-id-format`, the two
 settings get different values: the value used for `http-request
 add-header` will be one less than the value used for `unique-id-format`
 (this applies both to using %ID in the log format and to using
 `unique-id-header`).

You're damn right! I forgot this case where the ID could be used twice :-(

So we have no other choice but to copy the ID into the session or HTTP
transaction, since it's possible to use it several times. At the same
time, I'm wondering if we should not also increment it for new sessions,
because for people who forward non-HTTP traffic, there's no unique counter.

What I'm thinking about is the following then :

  - increment the global counter on each new session and store it into
the session.
  - increment it again when dealing with a new request over an existing
session.

That way it would count each transaction, either TCP connection or HTTP
request. And since the ID would be assigned to the session, it would
remain stable for the whole period where it's needed.
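
Roughly, in C (a sketch only; the helper name and stand-in global are
made up for illustration, not taken from the actual code):

/* minimal stand-in for haproxy's global counter */
static struct { unsigned int req_count; } global;

struct session {
	unsigned int uniq_id;   /* stable ID for this TCP session / HTTP request */
};

/* called once when the session is created (TCP connection), and again
 * for each new HTTP request arriving on an existing keep-alive session */
static void session_assign_id(struct session *s)
{
	s->uniq_id = global.req_count++;
}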

What do you think ?

Willy




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Willy Tarreau
On Sat, Jan 25, 2014 at 10:43:28AM +0100, Willy Tarreau wrote:
 So we have no other choice but to copy the ID into the session or HTTP
 transaction, since it's possible to use it several times. At the same
 time, I'm wondering if we should not also increment it for new sessions,
 because for people who forward non-HTTP traffic, there's no unique counter.
 
 What I'm thinking about is the following then :
 
   - increment the global counter on each new session and store it into
 the session.
   - increment it again when dealing with a new request over an existing
 session.
 
 That way it would count each transaction, either TCP connection or HTTP
 request. And since the ID would be assigned to the session, it would
 remain stable for the whole period where it's needed.

And I'm realizing that it would also make it possible to match the IDs
reported in the "show sess" and "show errors" output with the logs and
headers transmitted.

Willy




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-01-25 04:43:28 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 Hi Patrick,

 On Sat, Jan 25, 2014 at 03:40:38AM -0500, Patrick Hemmer wrote:
 This patch does appear to have solved the issue reported, but it
 introduced another.
 If I use `http-request add-header` with %rt in the value to add the
 request ID, and then I also use it in `unique-id-format`, the two
 settings get different values: the value used for `http-request
 add-header` will be one less than the value used for `unique-id-format`
 (this applies both to using %ID in the log format and to using
 `unique-id-header`).
 You're damn right! I forgot this case where the ID could be used twice :-(

 So we have no other choice but to copy the ID into the session or HTTP
 transaction, since it's possible to use it several times. At the same
 time, I'm wondering if we should not also increment it for new sessions,
 because for people who forward non-HTTP traffic, there's no unique counter.

 What I'm thinking about is the following then :

   - increment the global counter on each new session and store it into
 the session.
   - increment it again when dealing with a new request over an existing
 session.

 That way it would count each transaction, either TCP connection or HTTP
 request. And since the ID would be assigned to the session, it would
 remain stable for the whole period where it's needed.

 What do you think ?

Sounds reasonable. Running through it in my head, I can't conjure up any
scenario where that approach wouldn't work.


-Patrick




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Willy Tarreau
On Sat, Jan 25, 2014 at 05:05:07AM -0500, Patrick Hemmer wrote:
 Sounds reasonable. Running through it in my head, I can't conjure up any
 scenario where that approach wouldn't work.

Same here. And it works fine for me with the benefit of coherency
between all reported unique IDs.

I'm about to merge the attached patch. If you want to confirm that
it's OK for you as well, feel free to do so :-)

Willy

From 1f0da2485ea53b86a254be061ded69f5371d4a05 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Sat, 25 Jan 2014 11:01:50 +0100
Subject: BUG/MEDIUM: unique_id: HTTP request counter is not stable

Patrick Hemmer reported that using unique_id_format and logs did not
report the same unique ID counter since commit 9f09521 ("BUG/MEDIUM:
unique_id: HTTP request counter must be unique!"). This is because
the increment was done while producing the log message, so it was
performed twice.

A better solution consists in fetching a new value once per request
and saving it in the request or session context for all of this
request's life.

It happens that sessions already have a unique ID field which is used
for debugging and reporting errors, and which differs from the one
sent in logs and unique_id header.

So let's change this to reuse this field to have coherent IDs everywhere.
As of now, a session gets a new unique ID once it is instantiated. This
means that TCP sessions will also benefit from a unique ID that can be
logged. And this ID is renewed for each extra HTTP request received on
an existing session. Thus, all TCP sessions and HTTP requests will have
distinct IDs that will be stable along all their life, and coherent
between all places where they're used (logs, unique_id header,
"show sess", "show errors").

This feature is 1.5-specific, no backport to 1.4 is needed.
---
 doc/configuration.txt  | 2 +-
 include/types/global.h | 2 +-
 src/log.c              | 6 +++---
 src/proto_http.c       | 1 +
 src/session.c          | 2 +-
 5 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9831021..030a9a6 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -11438,7 +11438,7 @@ Please refer to the table below for currently defined variables :
   |   | %pid | PID   | numeric |
   | H | %r   | http_request  | string  |
   |   | %rc  | retries   | numeric |
-  | H | %rt  | http_request_counter  | numeric |
+  |   | %rt  | request_counter (HTTP req or TCP session) | numeric |
   |   | %s   | server_name   | string  |
   |   | %sc  | srv_conn (server concurrent connections)  | numeric |
   |   | %si  | server_IP   (target address)  | IP  |
diff --git a/include/types/global.h b/include/types/global.h
index cfc3d23..7d78d20 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -90,7 +90,7 @@ struct global {
 	int rlimit_memmax;  /* default ulimit-d in megs value : 0=unset */
 	long maxzlibmem;    /* max RAM for zlib in bytes */
 	int mode;
-	unsigned int req_count; /* HTTP request counter for logs and unique_id */
+	unsigned int req_count; /* request counter (HTTP or TCP session) for logs and unique_id */
 	int last_checks;
 	int spread_checks;
 	char *chroot;
diff --git a/src/log.c b/src/log.c
index f2ba621..2a6acf4 100644
--- a/src/log.c
+++ b/src/log.c
@@ -111,7 +111,7 @@ static const struct logformat_type logformat_keywords[] = {
{ pid, LOG_FMT_PID, PR_MODE_TCP, LW_INIT, NULL }, /* log pid */
{ r, LOG_FMT_REQ, PR_MODE_HTTP, LW_REQ, NULL },  /* request */
{ rc, LOG_FMT_RETRIES, PR_MODE_TCP, LW_BYTES, NULL },  /* retries */
-   { rt, LOG_FMT_COUNTER, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP request 
counter */
+   { rt, LOG_FMT_COUNTER, PR_MODE_TCP, LW_REQ, NULL }, /* request 
counter (HTTP or TCP session) */
{ s, LOG_FMT_SERVER, PR_MODE_TCP, LW_SVID, NULL },/* server */
{ sc, LOG_FMT_SRVCONN, PR_MODE_TCP, LW_BYTES, NULL },  /* srv_conn */
{ si, LOG_FMT_SERVERIP, PR_MODE_TCP, LW_SVIP, NULL }, /* server 
destination ip */
@@ -1512,13 +1512,13 @@ int build_logline(struct session *s, char *dst, size_t maxsize, struct list *list_format)
 
 		case LOG_FMT_COUNTER: // %rt
 			if (tmp->options & LOG_OPT_HEXA) {
-				iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count++);
+				iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", s->uniq_id);
 				if (iret < 0 || iret > dst + maxsize - tmplog)
 					goto out;
 				last_isspace = 0;
 

Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Patrick Hemmer
Confirmed. Testing various scenarios, and they all work.

Thanks for the quick patch :-)

-Patrick


*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-01-25 05:09:09 E
*To: *Patrick Hemmer hapr...@stormcloud9.net
*CC: *haproxy@formilux.org haproxy@formilux.org
*Subject: *Re: haproxy duplicate http_request_counter values

 On Sat, Jan 25, 2014 at 05:05:07AM -0500, Patrick Hemmer wrote:
 Sounds reasonable. Running through it in my head, I can't conjure up any
 scenario where that approach wouldn't work.
 Same here. And it works fine for me with the benefit of coherency
 between all reported unique IDs.

 I'm about to merge the attached patch. If you want to confirm that
 it's OK for you as well, feel free to do so :-)

 Willy




Re: haproxy duplicate http_request_counter values

2014-01-25 Thread Willy Tarreau
On Sat, Jan 25, 2014 at 03:14:00PM -0500, Patrick Hemmer wrote:
 Confirmed. Testing various scenarios, and they all work.
 
 Thanks for the quick patch :-)

Cool, thanks for the quick feedback :-)

Willy




Re: haproxy duplicate http_request_counter values (BUG)

2013-08-31 Thread Willy Tarreau
Hi guys,

On Wed, Aug 28, 2013 at 04:41:57PM +0200, William Lallemand wrote:
 On Tue, Aug 20, 2013 at 04:14:05PM -0400, Patrick Hemmer wrote:
  I see 2 ways of handling this.
  1) Move the code that populates the session unique_id member to
  http_process_req_common (or to http_wait_for_request where it's
  allocated). This will let requests terminated by an `errorfile`
  directive log out a request ID.
  2) Initialize the unique_id member upon allocation.
  
  I've attached a patch which does option 2, but I'm not sure if option 1
  would be preferable so that even `errorfile` requests will get a request ID.
  
  -Patrick
 
 Hello Patrick,
 
 Thanks for reporting the bug. I implemented something more relevant: the
 unique-id is now generated when a request fails.

Applied, thanks!

Willy




Re: haproxy duplicate http_request_counter values (BUG)

2013-08-28 Thread William Lallemand
On Tue, Aug 20, 2013 at 04:14:05PM -0400, Patrick Hemmer wrote:
 I see 2 ways of handling this.
 1) Move the code that populates the session unique_id member to
 http_process_req_common (or to http_wait_for_request where it's
 allocated). This will let requests terminated by an `errorfile`
 directive log out a request ID.
 2) Initialize the unique_id member upon allocation.
 
 I've attached a patch which does option 2, but I'm not sure if option 1
 would be preferable so that even `errorfile` requests will get a request ID.
 
 -Patrick

Hello Patrick,

Thanks for reporting the bug. I implemented something more relevant: the
unique-id is now generated when a request fails.

-- 
William Lallemand
From 6c2adb543c54df657e37836fc484a7f4e97ef7e1 Mon Sep 17 00:00:00 2001
From: William Lallemand wlallem...@exceliance.fr
Date: Wed, 28 Aug 2013 15:44:19 +0200
Subject: [PATCH] BUG/MEDIUM: unique_id: junk in log on empty unique_id

When a request failed, the unique_id was allocated but not generated;
the string was left uninitialized and junk was printed in the log with %ID.

This patch changes the behavior of the unique_id: it is now also
generated when a request fails.

This bug was reported by Patrick Hemmer.
---
 src/log.c        | 10 +++++++++-
 src/proto_http.c |  9 +++++----
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/src/log.c b/src/log.c
index 369dc34..f1fe40c 100644
--- a/src/log.c
+++ b/src/log.c
@@ -1488,8 +1488,10 @@ int build_logline(struct session *s, char *dst, size_t maxsize, struct list *list_format)
 break;
 
 			case LOG_FMT_UNIQUEID: // %ID
+				ret = NULL;
 				src = s->unique_id;
-				ret = lf_text(tmplog, src, maxsize - (tmplog - dst), tmp);
+				if (src)
+					ret = lf_text(tmplog, src, maxsize - (tmplog - dst), tmp);
 				if (ret == NULL)
 					goto out;
 				tmplog = ret;
@@ -1541,6 +1543,12 @@ void sess_log(struct session *s)
 			level = LOG_ERR;
 	}
 
+	/* if unique-id was not generated */
+	if (!s->unique_id && !LIST_ISEMPTY(&s->fe->format_unique_id)) {
+		if ((s->unique_id = pool_alloc2(pool2_uniqueid)) != NULL)
+			build_logline(s, s->unique_id, UNIQUEID_LEN, &s->fe->format_unique_id);
+	}
+
 	tmplog = update_log_hdr();
 	size = tmplog - logline;
 	size += build_logline(s, tmplog, sizeof(logline) - size, &s->fe->logformat);
diff --git a/src/proto_http.c b/src/proto_http.c
index 8d6eaf5..6ab2676 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -2635,9 +2635,6 @@ int http_wait_for_request(struct session *s, struct channel *req, int an_bit)
 		}
 	}
 
-	if (!LIST_ISEMPTY(&s->fe->format_unique_id))
-		s->unique_id = pool_alloc2(pool2_uniqueid);
-
 	/* 4. We may have to convert HTTP/0.9 requests to HTTP/1.0 */
 	if (unlikely(msg->sl.rq.v_l == 0) && !http_upgrade_v09_to_v10(txn))
 		goto return_bad_req;
@@ -3950,8 +3947,12 @@ int http_process_request(struct session *s, struct channel *req, int an_bit)
 
 	/* add unique-id if "header-unique-id" is specified */
 
-	if (!LIST_ISEMPTY(&s->fe->format_unique_id))
+	if (!LIST_ISEMPTY(&s->fe->format_unique_id)) {
+		if ((s->unique_id = pool_alloc2(pool2_uniqueid)) == NULL)
+			goto return_bad_req;
+		s->unique_id[0] = '\0';
 		build_logline(s, s->unique_id, UNIQUEID_LEN, &s->fe->format_unique_id);
+	}
 
 	if (s->fe->header_unique_id && s->unique_id) {
 		chunk_printf(&trash, "%s: %s", s->fe->header_unique_id, s->unique_id);
-- 
1.8.1.5



Re: haproxy duplicate http_request_counter values

2013-08-13 Thread Willy Tarreau
Hi Patrick,

On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?

Wow, congrats, you found a nice ugly bug! Here's how the counter is
retrieved at the moment of logging :

  iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);

As you can see, it uses a global variable which holds the global number of
requests seen at the moment of logging (or assigning the header) instead of
a unique value assigned to each request!

So all the requests that are logged in the same time frame between two
new requests get the same ID :-(

The counter should be auto-incrementing so that each retrieval is unique.
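
In other words (a toy illustration, not the actual haproxy code):

static unsigned int req_count;

/* broken: every request logged before the next increment of the
 * counter sees the same value */
unsigned int get_id_broken(void) { return req_count; }

/* fixed: consuming the counter on each retrieval makes every ID unique */
unsigned int get_id_fixed(void) { return req_count++; }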

Please try with the attached patch.

Thanks,
Willy

From 9f09521f2d2deacfb4b1b10b23eb5525b9941c62 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Tue, 13 Aug 2013 17:51:07 +0200
Subject: BUG/MEDIUM: unique_id: HTTP request counter must be unique!

The HTTP request counter is incremented non atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.

This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
---
 include/types/global.h | 2 +-
 src/log.c              | 4 ++--
 src/proto_http.c       | 2 --
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/types/global.h b/include/types/global.h
index 41cd67f..cfc3d23 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -90,7 +90,7 @@ struct global {
 	int rlimit_memmax;  /* default ulimit-d in megs value : 0=unset */
 	long maxzlibmem;    /* max RAM for zlib in bytes */
 	int mode;
-	unsigned int req_count; /* HTTP request counter */
+	unsigned int req_count; /* HTTP request counter for logs and unique_id */
 	int last_checks;
 	int spread_checks;
 	char *chroot;
diff --git a/src/log.c b/src/log.c
index 8f8fd8f..369dc34 100644
--- a/src/log.c
+++ b/src/log.c
@@ -1448,13 +1448,13 @@ int build_logline(struct session *s, char *dst, size_t maxsize, struct list *list_format)
 
 		case LOG_FMT_COUNTER: // %rt
 			if (tmp->options & LOG_OPT_HEXA) {
-				iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);
+				iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count++);
 				if (iret < 0 || iret > dst + maxsize - tmplog)
 					goto out;
 				last_isspace = 0;
 				tmplog += iret;
 			} else {
-				ret = ltoa_o(global.req_count, tmplog, dst + maxsize - tmplog);
+				ret = ltoa_o(global.req_count++, tmplog, dst + maxsize - tmplog);
 				if (ret == NULL)
 					goto out;
 				tmplog = ret;
diff --git a/src/proto_http.c b/src/proto_http.c
index 3ef6472..8d6eaf5 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -8289,8 +8289,6 @@ void http_init_txn(struct session *s)
 	txn->flags = 0;
 	txn->status = -1;
 
-	global.req_count++;
-
 	txn->cookie_first_date = 0;
 	txn->cookie_last_date = 0;
 
-- 
1.7.12.2.21.g234cd45.dirty



Re: haproxy duplicate http_request_counter values (BUG)

2013-08-13 Thread Patrick Hemmer

On 2013/08/11 15:45, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter
 (the full value is %{+X}o%pid-%rt), and am getting lots of
 duplicates. In one specific case, haproxy added the same
 http_request_counter value to 70 different http requests within a span
 of 61 seconds (from various client hosts too). Does the
 http_request_counter only increment under certain conditions, or is
 this a bug?

 This is with haproxy 1.5-dev19

 -Patrick


This appears to be part of a bug. I just experienced a scenario where
haproxy stopped responding. When I went into the log, I found binary
garbage in place of the request ID. I have haproxy configured to route
certain URLs, and to respond with an `errorfile` when a request comes in
that doesn't match any of the configured paths. It seems that whenever I
request an invalid URL and get the `errorfile` response, the request ID
gets corrupted and becomes jumbled binary data.

For example: haproxy[28645]: 207.178.167.185:49560 api bad_url/<NOSRV>
71/-1/-1/-1/71 3/3/0/0/3 0/0 127/242 403 PR-- Á + GET / HTTP/1.1
Notice the Á: that's supposed to be the process ID and request ID
separated by a hyphen. When I pipe it into xxd, I get this:

000: 6861 7072 6f78 795b 3238 3634 355d 3a20  haproxy[28645]:
010: 3230 372e 3137 382e 3136 372e 3138 353a  207.178.167.185:
020: 3439 3536 3020 6170 6920 6261 645f 7572  49560 api bad_ur
030: 6c2f 3c4e 4f53 5256 3e20 3731 2f2d 312f  l/<NOSRV> 71/-1/
040: 2d31 2f2d 312f 3731 2033 2f33 2f30 2f30  -1/-1/71 3/3/0/0
050: 2f33 2030 2f30 2031 3237 2f32 3432 2034  /3 0/0 127/242 4
060: 3033 2050 522d 2d20 90c1 8220 2b20 4745  03 PR-- ... + GE
070: 5420 2f20 4854 5450 2f31 2e31 0a         T / HTTP/1.1.
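
A plausible reading of this (a toy sketch, not haproxy code: it simply
assumes the ID buffer was allocated but never written before being
logged) is classic uninitialized-memory output:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	char *unique_id = malloc(64);  /* stands in for the ID allocation */
	/* the request fails, so nothing ever writes into the buffer */
	if (unique_id)
		printf("ID: %s\n", unique_id);  /* undefined behavior: prints junk,
		                                   much like the %ID field above */
	free(unique_id);
	return 0;
}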


I won't post my entire config as it's over 300 lines, but here's the
juicy stuff:


global
log 127.0.0.1   local0
maxconn 20480
user haproxy
group haproxy
daemon

defaults
log global
mode    http
option  httplog
option  dontlognull
retries 3
option  redispatch
timeout connect 5000
timeout client 6
timeout server 17
option  clitcpka
option  srvtcpka

stats   enable
stats   uri /haproxy/stats
stats   refresh 5
stats   auth my:secret

listen stats
bind 0.0.0.0:90
mode http
stats enable
stats uri /
stats refresh 5

frontend api
  bind *:80
  bind *:81 accept-proxy

  option httpclose
  option forwardfor
  http-request add-header X-Request-Timestamp %Ts.%ms
  unique-id-format %{+X}o%pid-%rt
  unique-id-header X-Request-Id
  rspadd X-Api-Host:\ i-a22932d9

  reqrep ^([^\ ]*)\ ([^\?\ ]*)(\?[^\ ]*)?\ HTTP.*  \0\r\nX-API-URL:\ \2


  acl is_1_1 path_dir /1/my/path
  use_backend 1_1 if is_1_1

  acl is_1_2 path_dir /1/my/other_path
  use_backend 1_2 if is_1_2

  ...

  default_backend bad_url

  log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

backend bad_url
  block if TRUE
  errorfile 403 /etc/haproxy/bad_url.http


Re: haproxy duplicate http_request_counter values (BUG)

2013-08-13 Thread haproxy
Oh, for some reason my mail client wasn't showing the response from
Willy when I made this reply. Not sure if this info is really necessary
any more. Will try the patch from that email and report back there.

-Patrick

On 08/13/2013 07:13 PM, Patrick Hemmer wrote:

 On 2013/08/11 15:45, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter
 (the full value is %{+X}o%pid-%rt), and am getting lots of
 duplicates. In one specific case, haproxy added the same
 http_request_counter value to 70 different http requests within a
 span of 61 seconds (from various client hosts too). Does the
 http_request_counter only increment under certain conditions, or is
 this a bug?

 This is with haproxy 1.5-dev19

 -Patrick


 This appears to be part of a bug. I just experienced a scenario where
 haproxy stopped responding. When I went into the log, I found binary
 garbage in place of the request ID. I have haproxy configured to route
 certain URLs, and to respond with an `errorfile` when a request comes
 in that doesn't match any of the configured paths. It seems that
 whenever I request an invalid URL and get the `errorfile` response, the
 request ID gets corrupted and becomes jumbled binary data.

 For example: haproxy[28645]: 207.178.167.185:49560 api bad_url/<NOSRV>
 71/-1/-1/-1/71 3/3/0/0/3 0/0 127/242 403 PR-- Á + GET / HTTP/1.1
 Notice the Á: that's supposed to be the process ID and request ID
 separated by a hyphen. When I pipe it into xxd, I get this:

 000: 6861 7072 6f78 795b 3238 3634 355d 3a20  haproxy[28645]:
 010: 3230 372e 3137 382e 3136 372e 3138 353a  207.178.167.185:
 020: 3439 3536 3020 6170 6920 6261 645f 7572  49560 api bad_ur
 030: 6c2f 3c4e 4f53 5256 3e20 3731 2f2d 312f  l/<NOSRV> 71/-1/
 040: 2d31 2f2d 312f 3731 2033 2f33 2f30 2f30  -1/-1/71 3/3/0/0
 050: 2f33 2030 2f30 2031 3237 2f32 3432 2034  /3 0/0 127/242 4
 060: 3033 2050 522d 2d20 90c1 8220 2b20 4745  03 PR-- ... + GE
 070: 5420 2f20 4854 5450 2f31 2e31 0a         T / HTTP/1.1.


 I won't post my entire config as it's over 300 lines, but here's the
 juicy stuff:


 global
 log 127.0.0.1   local0
 maxconn 20480
 user haproxy
 group haproxy
 daemon

 defaults
 log global
 mode    http
 option  httplog
 option  dontlognull
 retries 3
 option  redispatch
 timeout connect 5000
 timeout client 6
 timeout server 17
 option  clitcpka
 option  srvtcpka

 stats   enable
 stats   uri /haproxy/stats
 stats   refresh 5
 stats   auth my:secret

 listen stats
 bind 0.0.0.0:90
 mode http
 stats enable
 stats uri /
 stats refresh 5

 frontend api
   bind *:80
   bind *:81 accept-proxy

   option httpclose
   option forwardfor
   http-request add-header X-Request-Timestamp %Ts.%ms
   unique-id-format %{+X}o%pid-%rt
   unique-id-header X-Request-Id
   rspadd X-Api-Host:\ i-a22932d9

   reqrep ^([^\ ]*)\ ([^\?\ ]*)(\?[^\ ]*)?\ HTTP.*  \0\r\nX-API-URL:\ \2


   acl is_1_1 path_dir /1/my/path
   use_backend 1_1 if is_1_1

   acl is_1_2 path_dir /1/my/other_path
   use_backend 1_2 if is_1_2

   ...

   default_backend bad_url

   log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

 backend bad_url
   block if TRUE
   errorfile 403 /etc/haproxy/bad_url.http