Re: HA Proxy Install Guide

2013-08-13 Thread Baptiste
HAProxy can run without root rights, but some features won't be available.
For example, you won't be able to bind to a port below 1024, transparent
mode may not work, and some performance-tuning settings cannot be applied.

But all basic TCP and HTTP features should work without any issue.
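As a small illustrative sketch (names, addresses and ports invented here, not taken from the thread), a proxy that binds only to unprivileged ports runs fine without root:

```haproxy
# Minimal non-root setup: all ports are >= 1024, so no special
# privileges are needed to bind them.
frontend ft_web
    bind 0.0.0.0:8080        # unprivileged port
    mode http
    default_backend bk_web

backend bk_web
    mode http
    server web1 192.168.0.10:80 check
```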

Baptiste

On Mon, Aug 12, 2013 at 11:12 PM, Jonathan Matthews
cont...@jpluscplusm.com wrote:
 On 12 August 2013 21:15, Uttla, Rao rao.ut...@lmco.com wrote:
 Hi ,

 Can you please provide the following info? Thanks in advance.

 Where can I find the HAProxy install guide?

 Hi Rao -

 Take a look at the resources linked to by http://haproxy.1wt.eu/. If
 you can suggest an improvement to it or to the documentation linked
 from it, please do make that suggestion here, on this public mailing
 list.

 also can I install HA Proxy without
 root userid.

 I would imagine that would work just fine with certain caveats. Let us
 know how you get on!

 Regards,
 Jonathan
 --
 Jonathan Matthews
 Oxford, London, UK
 http://www.jpluscplusm.com/contact.html




Re: RDP Session Broker Redirect Token

2013-08-13 Thread Mathew Levett
Just an update on this: it looks like there may be a small bug in the way
multiports work when used with RDP, because if I specify the port on the
real servers, as below, it then works correctly.

listen TS-Farm
bind 192.168.75.38:3389
mode tcp
balance leastconn
persist rdp-cookie
server backup 127.0.0.1:9081 backup  non-stick
option tcpka
tcp-request inspect-delay 5s
tcp-request content accept if RDP_COOKIE
timeout client 12h
timeout server 12h
option redispatch
option abortonclose
maxconn 4
log global
option tcplog
server TS01 192.168.75.36:3389  weight 1  check   inter 2000  rise 2
fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions
server TS02 192.168.75.37:3389  weight 1  check   inter 2000  rise 2
fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions

It would appear that when the Session Broker is in Use Token
Redirection mode, you have to specify the RIP (real server) ports or
you end up with duplicate sessions.
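Condensed from the two configurations in this thread, the difference comes down to the form of the server address:

```haproxy
# Fails under "Use Token Redirection": no explicit server port, so the
# port carried in the RDP cookie can never match
server TS01 192.168.75.36 weight 1 check port 3389

# Works: the explicit port can match the one in the RDP cookie
server TS01 192.168.75.36:3389 weight 1 check
```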

Kind Regards,



On 12 August 2013 17:17, Mathew Levett mat...@loadbalancer.org wrote:

 Hi all,

 We seem to have an issue with haproxy where, if we set our TS servers to
 use Use Token Redirection instead of Use IP Redirection (recommended),
 it does not work.

 The configuration I am using is as follows

 listen TS-Farm
   bind 192.168.75.38:3389
   mode tcp
   balance leastconn
   persist rdp-cookie
   server backup 127.0.0.1:9081 backup  non-stick
   option tcpka
   tcp-request inspect-delay 5s
   tcp-request content accept if RDP_COOKIE
   timeout client 12h
   timeout server 12h
   option redispatch
   option abortonclose
   maxconn 4
   log global
   option tcplog
   server TS01 192.168.75.36  weight 1  check port 3389  inter 2000  rise 2  fall 3  minconn 0  maxconn 0  on-marked-down shutdown-sessions
   server TS02 192.168.75.37  weight 1  check port 3389  inter 2000  rise 2  fall 3  minconn 0  maxconn 0  on-marked-down shutdown-sessions

 However, what I have noticed in packet captures is that there seems to be
 both a mstshash=USERNAME first in the stream and then a msts=Encoded IP
 after.

 It seems that 70% of the time a user is reconnected to his disconnected
 session correctly, but the other 30% of the time they end up on any one
 of the other servers. I am wondering if haproxy is triggering on the
 mstshash instead of the msts, as the msts seems to be sent after the
 mstshash.

 Any help would be greatly appreciated.

 Kind Regards.




Re: Different check conditions for server selected from lb trees

2013-08-13 Thread Willy Tarreau
Hi Godbach,

On Wed, Aug 07, 2013 at 11:26:40AM +0800, Godbach wrote:
 From 370a74e89af0153a96ed8b7ebd4648258c89109e Mon Sep 17 00:00:00 2001
 From: Godbach nylzhao...@gmail.com
 Date: Wed, 7 Aug 2013 09:48:23 +0800
 Subject: [PATCH] BUG/MINOR: use the same check condition for server as other
  algorithms
(...)

patch applied, thank you!

Willy




Re: HAProxy v1.5-dev19 OpenSSL Support Issue

2013-08-13 Thread Willy Tarreau
Hey Scott,

On Sun, Aug 11, 2013 at 10:22:15AM +0200, Lukas Tribus wrote:
 Hi Scott,
 
  src/ssl_sock.c:796: error: 'struct check' has no member named 'xprt'
 
 Strange, I cannot reproduce this.
 
 
 
  Now if I edit src/ssl_sock.c line 796 and comment out
  'srv->check.xprt = ssl_sock;' and replace it with 'srv->xprt = ssl_sock;'
  HAProxy and OpenSSL compile correctly
 
 I'm not entirely sure what this exactly does, but I would imagine that this
 breaks health checks on ssl enabled backends.

Lukas is right here. Are you sure you didn't apply a patch or something
to your version? Because clearly this cannot happen: there is a check
field in the server struct, and this check struct contains an xprt
field. So I don't see how the build can fail. Your change does not fix
the issue; it applies the check protocol to the nominal traffic instead
of applying it to the health checks.

Regards,
Willy




Re: HAProxy v1.5-dev19 OpenSSL Support Issue

2013-08-13 Thread Scott McKeown
Hi Guys,

I've not applied any patches to the download as this was a direct 'wget'
from the Git repository.
As follows is the OpenSSL v1.0.0 Centos 6.4 x64 build

[root@localhost ~]# wget
https://github.com/horms/haproxy/archive/agent-check-20130806.zip
[root@localhost ~]# unzip agent-check-20130806
[root@localhost ~]# cd haproxy-agent-check-20130806/
[root@localhost haproxy-agent-check-20130806]# make TARGET=linux26 USE_STATIC_PCRE=1 USE_LINUX_TPROXY=1 USE_OPENSSL=1
[root@localhost haproxy-agent-check-20130806]# ./haproxy -vv
HA-Proxy version 1.5-dev19 2013/06/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_OPENSSL=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without zlib support (USE_ZLIB not set)
Compression algorithms supported : identity
Built with OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
Running on OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


~Regards,
Scott



On 13 August 2013 15:23, Willy Tarreau w...@1wt.eu wrote:

 Hey Scott,

 On Sun, Aug 11, 2013 at 10:22:15AM +0200, Lukas Tribus wrote:
  Hi Scott,
 
    src/ssl_sock.c:796: error: 'struct check' has no member named 'xprt'
 
  Strange, I cannot reproduce this.
 
 
 
    Now if I edit src/ssl_sock.c line 796 and comment out
    'srv->check.xprt = ssl_sock;' and replace it with 'srv->xprt = ssl_sock;'
    HAProxy and OpenSSL compile correctly
 
  I'm not entirely sure what this exactly does, but I would imagine that
 this
  breaks health checks on ssl enabled backends.

  Lukas is right here. Are you sure you didn't apply a patch or something
  to your version? Because clearly this cannot happen: there is a check
  field in the server struct, and this check struct contains an xprt
  field. So I don't see how the build can fail. Your change does not fix
  the issue; it applies the check protocol to the nominal traffic instead
  of applying it to the health checks.

 Regards,
 Willy




-- 
With Kind Regards.

Scott McKeown
Loadbalancer.org
http://www.loadbalancer.org


RE: HAProxy v1.5-dev19 OpenSSL Support Issue

2013-08-13 Thread Lukas Tribus
Hi Scott,

 I've not applied any patches to the download as this was a direct 'wget'
 from the Git repository.

That is *not* haproxy's git repository; it looks to be Simon Horman's
tree.

I doubt this problem ever affected regular haproxy. You will have to
contact Simon; he is working with you guys at loadbalancer.org anyway.


please download haproxy builds here:
http://haproxy.1wt.eu/download/1.5/

or via git here:
http://git.1wt.eu/git/haproxy.git/



Cheers,
Lukas 


Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-13 Thread Willy Tarreau
Hi guys,

On Thu, Aug 08, 2013 at 08:48:53PM +0800, Godbach wrote:
 On 2013/8/8 18:50, Mark Brooks wrote:
 The issue I am seeing is that, using the dev version of HAProxy
 (1.5-dev19, git commit id 00f0084752eab236af80e61291d672e835790cff),
 I have a source IP stick table and I'm trying to drop specific entries
 from it, but it's resulting in the whole table being dropped each time.

(...)

I got it: it's a stupid fix for a previous bug that was killing a bit
too much this time.

Here's the fix.

Best regards,
Willy

From 33fba6f78f2e9e9f1274bde10ac1cd86f2804d64 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Tue, 13 Aug 2013 16:44:40 +0200
Subject: BUG/MINOR: cli: clear table must not kill entries that don't match
 condition

Mark Brooks reported the following issue :

My table looks like this -

  0x24a8294: key=192.168.136.10 use=0 exp=1761492 server_id=3
  0x24a8344: key=192.168.136.11 use=0 exp=1761506 server_id=2
  0x24a83f4: key=192.168.136.12 use=0 exp=1761520 server_id=3
  0x24a84a4: key=192.168.136.13 use=0 exp=1761534 server_id=2
  0x24a8554: key=192.168.136.14 use=0 exp=1761548 server_id=3
  0x24a8604: key=192.168.136.15 use=0 exp=1761563 server_id=2
  0x24a86b4: key=192.168.136.16 use=0 exp=1761580 server_id=3
  0x24a8764: key=192.168.136.17 use=0 exp=1761592 server_id=2
  0x24a8814: key=192.168.136.18 use=0 exp=1761607 server_id=3
  0x24a88c4: key=192.168.136.19 use=0 exp=1761622 server_id=2
  0x24a8974: key=192.168.136.20 use=0 exp=1761636 server_id=3
  0x24a8a24: key=192.168.136.21 use=0 exp=1761649 server_id=2

I'm running the command:

  socat unix-connect:/var/run/haproxy.stat stdio <<< 'clear table VIP_Name-2 data.server_id eq 2'

I'd assume that the entries with server_id = 2 would be removed, but it's
removing everything each time.
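For context, a table like the one above is typically built by a configuration along these lines (a hypothetical sketch; the backend layout and server names are invented, not taken from Mark's setup):

```haproxy
backend VIP_Name-2
    mode tcp
    balance roundrobin
    # one entry per source IP, storing the id of the server the client
    # stuck to; "clear table VIP_Name-2 data.server_id eq 2" should
    # remove only the entries whose stored server_id equals 2
    stick-table type ip size 500k expire 30m store server_id
    stick on src
    server s1 192.168.136.101:3389 check
    server s2 192.168.136.102:3389 check
```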

The cause of the issue is a missing test for skip_entry when deciding
whether to clear the key or not. The test was present when only the
last node is to be removed, so removing only the first node from a
list of two always did the right thing, explaining why it remained
unnoticed in basic unit tests.

The bug was introduced by commit 8fa52f4e which attempted to fix a
previous issue with this feature where only the last node was removed.

This bug is 1.5-specific and does not require any backport.
---
 src/dumpstats.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index 46066b5..8707e22 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -4170,7 +4170,7 @@ static int stats_table_request(struct stream_interface *si, int action)
 				si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);
 				if (show)
 					stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
-				else
+				else if (!skip_entry && !si->applet.ctx.table.entry->ref_cnt)
 					stksess_kill(&si->applet.ctx.table.proxy->table, old);
 				si->applet.ctx.table.entry->ref_cnt++;
 				break;
-- 
1.7.12.2.21.g234cd45.dirty



Re: HAProxy v1.5-dev19 OpenSSL Support Issue

2013-08-13 Thread Willy Tarreau
On Tue, Aug 13, 2013 at 03:36:09PM +0100, Scott McKeown wrote:
 Hi Guys,
 
 I've not applied any patches to the download as this was a direct 'wget'
 from the Git repository.
 As follows is the OpenSSL v1.0.0 Centos 6.4 x64 build
 
 [root@localhost ~]# wget
 https://github.com/horms/haproxy/archive/agent-check-20130806.zip

Hmmm it looks to me that this is not the mainstream code, but Simon's tree.
Oh BTW that reminds me that I have still not reviewed his patch set, shame
on me :-(

Willy




Re: HAProxy v1.5-dev19 OpenSSL Support Issue

2013-08-13 Thread Scott McKeown
Whoops, I'll have another bash at getting this to work with the correct
branch then.

Sorry for the confusion to one and all.



On 13 August 2013 15:56, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Aug 13, 2013 at 03:36:09PM +0100, Scott McKeown wrote:
  Hi Guys,
 
  I've not applied any patches to the download as this was a direct 'wget'
  from the Git repository.
  As follows is the OpenSSL v1.0.0 Centos 6.4 x64 build
 
  [root@localhost ~]# wget
  https://github.com/horms/haproxy/archive/agent-check-20130806.zip

 Hmmm it looks to me that this is not the mainstream code, but Simon's tree.
 Oh BTW that reminds me that I have still not reviewed his patch set, shame
 on me :-(

 Willy




-- 
With Kind Regards.

Scott McKeown
Loadbalancer.org
http://www.loadbalancer.org


Re: [PATCH] BUG/MINOR: ssl_sock.c: use PATH_MAX only when defined

2013-08-13 Thread Willy Tarreau
On Mon, Aug 12, 2013 at 01:47:02PM +0300, Apollon Oikonomopoulos wrote:
  Anyway I'd prefer something simpler : let's define PATH_MAX in compat.h
  if it is not defined.
 
 Yes, that's probably a cleaner solution.

Finally I switched to MAXPATHLEN which is already handled in compat.h if
it is not defined.

Thanks!
Willy




Re: RDP Session Broker Redirect Token

2013-08-13 Thread Willy Tarreau
Hi Mathew,

On Tue, Aug 13, 2013 at 12:40:43PM +0100, Mathew Levett wrote:
 Just an update on this: it looks like there may be a small bug in the way
 multiports work when used with RDP, because if I specify the port on the
 real servers, as below, it then works correctly.
 
 listen TS-Farm
   bind 192.168.75.38:3389
   mode tcp
   balance leastconn
   persist rdp-cookie
   server backup 127.0.0.1:9081 backup  non-stick
   option tcpka
   tcp-request inspect-delay 5s
   tcp-request content accept if RDP_COOKIE
   timeout client 12h
   timeout server 12h
   option redispatch
   option abortonclose
   maxconn 4
   log global
   option tcplog
   server TS01 192.168.75.36:3389  weight 1  check   inter 2000  rise 2
 fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions
   server TS02 192.168.75.37:3389  weight 1  check   inter 2000  rise 2
 fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions
 
 It would appear that when the Session Broker is in Use Token
 Redirection mode, you have to specify the RIP (real server) ports or
 you end up with duplicate sessions.

Hmmm, good point. The RDP protocol transmits the port number in the cookie,
so it's a discriminant as well as the address. Thus, I think we should emit
a warning when persist rdp-cookie is used in a farm where at least one
server does not have an explicit port.

Finally, I've just done it with the attached patch. Kudos for catching this;
I know how hard it can be sometimes to track long-session persistence issues!

Best regards,
Willy

From 82ffa39bfd34e5680cb65cc0b7ef625c0a274856 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Tue, 13 Aug 2013 17:19:08 +0200
Subject: MINOR: config: warn when a server with no specific port uses
 rdp-cookie

Mathew Levett reported an issue which is a bit nasty and hard to track
down. RDP cookies contain both the IP and the port, and haproxy matches
them exactly. So if a server has no port specified (or a remapped port),
it will never match a port specified in a cookie. Better warn the user
when this is detected.
---
 src/cfgparse.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index d51e1b6..41c1949 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -6885,6 +6885,12 @@ out_uri_auth_compat:
 				err_code |= ERR_WARN;
 			}
 
+			if ((newsrv->state & SRV_MAPPORTS) && (curproxy->options2 & PR_O2_RDPC_PRST)) {
+				Warning("config : %s '%s' : RDP cookie persistence will not work for server '%s' because it lacks an explicit port number.\n",
+				        proxy_type_str(curproxy), curproxy->id, newsrv->id);
+				err_code |= ERR_WARN;
+			}
+
 #if defined(CONFIG_HAP_CTTPROXY) || defined(CONFIG_HAP_TRANSPARENT)
 			if (curproxy->mode != PR_MODE_HTTP && newsrv->conn_src.bind_hdr_occ) {
 				newsrv->conn_src.bind_hdr_occ = 0;
-- 
1.7.12.2.21.g234cd45.dirty



Error 500 with -m Option

2013-08-13 Thread Schmitt, Christian
I've tried to setup haproxy on Ubuntu 12.04 LTS.

My HaProxy Version is:

root@node01:~# haproxy -v
HA-Proxy version 1.4.18 2011/09/16
Copyright 2000-2011 Willy Tarreau w...@1wt.eu

I've tried lots of things to get it running, endlessly.

I only wanted to set up HAProxy to proxy requests to my floating IP
that runs over Heartbeat. It was really simple: every backend (Apache2,
MySQL and Elasticsearch) has a bind IP, HAProxy has another bind IP, and
when one server drops, the floating IP is given to another server;
voilà, I had high availability and load balancing. Currently I have a
third server in the background, but it only runs a smaller version of
MySQL and Elasticsearch to avoid split-brain conditions. Since it is
underpowered, it shouldn't be seen by any client.

Still, I got my configuration working.

But then I saw some strange behavior: my backends always returned
strange 500 errors.

I looked at everything, even the debug modes of HAProxy, MySQL and
Apache2, and found nothing. I also checked server resource consumption
via Landscape, top and free -m, and everything was okay. I mean, two
servers with 48 GB of RAM and 24 cores each, plus 2 HDDs in RAID1 for
Apache2 and 4 HDDs in RAID10 for MySQL Galera replication, should be
enough for serving 1000 requests/s or way less.

Memory consumption was balanced at around 4 GB and CPU consumption was
0.01% or even less. Memory consumption didn't exceed 8% on these two
servers, so everything was fine.

Except that a large share of my requests still got killed when running
them over HAProxy. On Ubuntu, HAProxy ships with these settings:

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=0
# Add extra flags here.
#EXTRAOPTS=-de -m 16

16 megabytes seems really low, but it should be fine.

However, "should"...

Still, that couldn't be the problem, since I changed the value to this:

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS=-de -m 4096

So 4096 megabytes of memory should be more than enough; even MySQL
and/or Apache2 had way less allocated. Still, my requests got lost in
the middle of nowhere.

Later I ran haproxy via the command line to see a debug log, and
everything worked. Everything. I removed -m and everything worked as
expected.

Can somebody tell me what I did wrong? Did I run into a bug? And if so,
which bug? I mean, there isn't even a fix in newer versions.

Could I still turn on the memory limit for HAProxy?

It isn't really necessary, since I think I will never hit more than 50%
of the memory cap, but it would still be nice to control HAProxy a
little bit more.


Re: TCP reject logging of request

2013-08-13 Thread Willy Tarreau
Hi,

On Mon, Aug 12, 2013 at 04:45:42PM +0200, Ghislain wrote:
 On 05/08/2013 10:44, Baptiste wrote:
 Hi Ghislain,
 
 To log such rejected connections, please ensure you don't have the
 dontlognull option enabled and that you're rejecting connections using
 the tcp-request content statement.
 
 Baptiste
 
 
 Thanks for the hint; I was using dontlognull, so I just removed it and
 added the no option in the frontend.
 
 I use a simple thing like this:
 
 
 frontend ft_https
 mode tcp
 no option dontlognull
 option tcplog
 bind 0.0.0.0:443
 stick-table type ip size 500k expire 30s store 
 gpc0,http_req_rate(10s),conn_cur
 tcp-request connection track-sc1 src
 tcp-request connection reject if { src_get_gpc0 gt 0 } or { 
 src_conn_cur ge 30 }
 
 default_backend bk_https
 
 backend bk_https
 mode tcp
 balance roundrobin
 acl abuse src_http_req_rate(ft_https) ge 200
 acl flag_abuser src_inc_gpc0(ft_https)
 tcp-request content reject if abuse flag_abuser
 
 I cannot get any log for the rejects. The same configuration in http
 mode gives me a log with the PR-- flag, which is good as it indicates a
 reject because of a deny rule, but in TCP mode I am unable to get any
 logging of the denied connections. I use a simple 'ab' call to stress it.

This is expected: you're rejecting at the earliest possible moment, where
no logs can be produced (tcp-request connection). If you want to get some
logs, reject a bit later, using tcp-request content. Note that it works
when you're in http mode because your backend's tcp-request content rule
probably matches at a lower rate than the frontend's rule. This rule,
however, does not match in TCP mode since there's no HTTP request.
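To illustrate, here is a hypothetical rework of Ghislain's frontend that rejects at the content stage instead of the connection stage, so the reject can show up in the logs (a sketch assuming 1.5-dev syntax, not a tested configuration):

```haproxy
frontend ft_https
    mode tcp
    no option dontlognull
    option tcplog
    bind 0.0.0.0:443
    stick-table type ip size 500k expire 30s store gpc0,conn_cur
    tcp-request connection track-sc1 src
    # "tcp-request connection reject" fires before a log can be emitted;
    # rejecting at the content stage produces a log line instead
    tcp-request inspect-delay 5s
    tcp-request content reject if { src_get_gpc0 gt 0 } or { src_conn_cur ge 30 }
    default_backend bk_https
```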

Regards,
Willy




Re: haproxy duplicate http_request_counter values

2013-08-13 Thread Willy Tarreau
Hi Patrick,

On Sun, Aug 11, 2013 at 03:45:36PM -0400, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter (the
 full value is %{+X}o%pid-%rt), and am getting lots of duplicates. In
 one specific case, haproxy added the same http_request_counter value to
 70 different http requests within a span of 61 seconds (from various
 client hosts too). Does the http_request_counter only increment under
 certain conditions, or is this a bug?

Wow, congrats, you found a nice ugly bug! Here's how the counter is
retrieved at the moment of logging:

  iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);

As you can see, it uses a global variable which holds the global number of
requests seen at the moment of logging (or assigning the header) instead of
a unique value assigned to each request!

So all the requests that are logged in the same time frame between two
new requests get the same ID :-(

The counter should be auto-incrementing so that each retrieval is unique.

Please try with the attached patch.

Thanks,
Willy

From 9f09521f2d2deacfb4b1b10b23eb5525b9941c62 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Tue, 13 Aug 2013 17:51:07 +0200
Subject: BUG/MEDIUM: unique_id: HTTP request counter must be unique!

The HTTP request counter is incremented non atomically, which means that
many requests can log the same ID. Let's increment it when it is consumed
so that we avoid this case.

This bug was reported by Patrick Hemmer. It's 1.5-specific and does not
need to be backported.
---
 include/types/global.h | 2 +-
 src/log.c  | 4 ++--
 src/proto_http.c   | 2 --
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/types/global.h b/include/types/global.h
index 41cd67f..cfc3d23 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -90,7 +90,7 @@ struct global {
 	int rlimit_memmax;  /* default ulimit-d in megs value : 0=unset */
 	long maxzlibmem;    /* max RAM for zlib in bytes */
 	int mode;
-	unsigned int req_count; /* HTTP request counter */
+	unsigned int req_count; /* HTTP request counter for logs and unique_id */
 	int last_checks;
 	int spread_checks;
 	char *chroot;
diff --git a/src/log.c b/src/log.c
index 8f8fd8f..369dc34 100644
--- a/src/log.c
+++ b/src/log.c
@@ -1448,13 +1448,13 @@ int build_logline(struct session *s, char *dst, size_t maxsize, struct list *list_format)
 
 			case LOG_FMT_COUNTER: // %rt
 				if (tmp->options & LOG_OPT_HEXA) {
-					iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count);
+					iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", global.req_count++);
 					if (iret < 0 || iret > dst + maxsize - tmplog)
 						goto out;
 					last_isspace = 0;
 					tmplog += iret;
 				} else {
-					ret = ltoa_o(global.req_count, tmplog, dst + maxsize - tmplog);
+					ret = ltoa_o(global.req_count++, tmplog, dst + maxsize - tmplog);
 					if (ret == NULL)
 						goto out;
 					tmplog = ret;
diff --git a/src/proto_http.c b/src/proto_http.c
index 3ef6472..8d6eaf5 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -8289,8 +8289,6 @@ void http_init_txn(struct session *s)
 	txn->flags = 0;
 	txn->status = -1;
 
-	global.req_count++;
-
 	txn->cookie_first_date = 0;
 	txn->cookie_last_date = 0;
 
 
-- 
1.7.12.2.21.g234cd45.dirty



Re: haproxy duplicate http_request_counter values (BUG)

2013-08-13 Thread Patrick Hemmer

On 2013/08/11 15:45, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter
 (the full value is %{+X}o%pid-%rt), and am getting lots of
 duplicates. In one specific case, haproxy added the same
 http_request_counter value to 70 different http requests within a span
 of 61 seconds (from various client hosts too). Does the
 http_request_counter only increment under certain conditions, or is
 this a bug?

 This is with haproxy 1.5-dev19

 -Patrick


This appears to be part of a bug. I just experienced a scenario where
haproxy stopped responding. When I went into the log I found binary
garbage in place of the request ID. I have haproxy configured to route
certain URLs, and to respond with a `errorfile` when a request comes in
that doesn't match any of the configure paths. It seems whenever I
request an invalid URL and get the `errorfile` response, the request ID
gets screwed up and becomes jumbled binary data.

For example: haproxy[28645]: 207.178.167.185:49560 api bad_url/<NOSRV>
71/-1/-1/-1/71 3/3/0/0/3 0/0 127/242 403 PR-- Á + GET / HTTP/1.1
Notice the Á: that's supposed to be the process ID and request ID
separated by a hyphen. When I pipe it into xxd, I get this:

000: 6861 7072 6f78 795b 3238 3634 355d 3a20  haproxy[28645]:
010: 3230 372e 3137 382e 3136 372e 3138 353a  207.178.167.185:
020: 3439 3536 3020 6170 6920 6261 645f 7572  49560 api bad_ur
030: 6c2f 3c4e 4f53 5256 3e20 3731 2f2d 312f  l/<NOSRV> 71/-1/
040: 2d31 2f2d 312f 3731 2033 2f33 2f30 2f30  -1/-1/71 3/3/0/0
050: 2f33 2030 2f30 2031 3237 2f32 3432 2034  /3 0/0 127/242 4
060: 3033 2050 522d 2d20 90c1 8220 2b20 4745  03 PR-- ... + GE
070: 5420 2f20 4854 5450 2f31 2e31 0a T / HTTP/1.1.


I won't post my entire config as it's over 300 lines, but here's the
juicy stuff:


global
log 127.0.0.1   local0
maxconn 20480
user haproxy
group haproxy
daemon

defaults
log global
modehttp
option  httplog
option  dontlognull
retries 3
option  redispatch
timeout connect 5000
timeout client 6
timeout server 17
option  clitcpka
option  srvtcpka

stats   enable
stats   uri /haproxy/stats
stats   refresh 5
stats   auth my:secret

listen stats
bind 0.0.0.0:90
mode http
stats enable
stats uri /
stats refresh 5

frontend api
  bind *:80
  bind *:81 accept-proxy

  option httpclose
  option forwardfor
  http-request add-header X-Request-Timestamp %Ts.%ms
  unique-id-format %{+X}o%pid-%rt
  unique-id-header X-Request-Id
  rspadd X-Api-Host:\ i-a22932d9

  reqrep ^([^\ ]*)\ ([^\?\ ]*)(\?[^\ ]*)?\ HTTP.*  \0\r\nX-API-URL:\ \2


  acl is_1_1 path_dir /1/my/path
  use_backend 1_1 if is_1_1

  acl is_1_2 path_dir /1/my/other_path
  use_backend 1_2 if is_1_2

  ...

  default_backend bad_url

  log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
%ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

backend bad_url
  block if TRUE
  errorfile 403 /etc/haproxy/bad_url.http


Re: haproxy duplicate http_request_counter values (BUG)

2013-08-13 Thread haproxy
Oh, for some reason my mail client wasn't showing the response from
Willy when I made this reply. Not sure if this info is really necessary
any more. Will try the patch on that email and report back to it.

-Patrick

On 08/13/2013 07:13 PM, Patrick Hemmer wrote:

 On 2013/08/11 15:45, Patrick Hemmer wrote:
 I'm using the %rt field in the unique-id-format config parameter
 (the full value is %{+X}o%pid-%rt), and am getting lots of
 duplicates. In one specific case, haproxy added the same
 http_request_counter value to 70 different http requests within a
 span of 61 seconds (from various client hosts too). Does the
 http_request_counter only increment under certain conditions, or is
 this a bug?

 This is with haproxy 1.5-dev19

 -Patrick


 This appears to be part of a bug. I just experienced a scenario where
 haproxy stopped responding. When I went into the log I found binary
 garbage in place of the request ID. I have haproxy configured to route
 certain URLs, and to respond with a `errorfile` when a request comes
 in that doesn't match any of the configure paths. It seems whenever I
 request an invalid URL and get the `errorfile` response, the request
 ID gets screwed up and becomes jumbled binary data.

 For example: haproxy[28645]: 207.178.167.185:49560 api bad_url/<NOSRV>
 71/-1/-1/-1/71 3/3/0/0/3 0/0 127/242 403 PR-- Á + GET / HTTP/1.1
 Notice the Á: that's supposed to be the process ID and request ID
 separated by a hyphen. When I pipe it into xxd, I get this:

 000: 6861 7072 6f78 795b 3238 3634 355d 3a20  haproxy[28645]:
 010: 3230 372e 3137 382e 3136 372e 3138 353a  207.178.167.185:
 020: 3439 3536 3020 6170 6920 6261 645f 7572  49560 api bad_ur
 030: 6c2f 3c4e 4f53 5256 3e20 3731 2f2d 312f  l/<NOSRV> 71/-1/
 040: 2d31 2f2d 312f 3731 2033 2f33 2f30 2f30  -1/-1/71 3/3/0/0
 050: 2f33 2030 2f30 2031 3237 2f32 3432 2034  /3 0/0 127/242 4
 060: 3033 2050 522d 2d20 90c1 8220 2b20 4745  03 PR-- ... + GE
 070: 5420 2f20 4854 5450 2f31 2e31 0a T / HTTP/1.1.


 I won't post my entire config as it's over 300 lines, but here's the
 juicy stuff:


 global
 log 127.0.0.1   local0
 maxconn 20480
 user haproxy
 group haproxy
 daemon

 defaults
 log global
 modehttp
 option  httplog
 option  dontlognull
 retries 3
 option  redispatch
 timeout connect 5000
 timeout client 6
 timeout server 17
 option  clitcpka
 option  srvtcpka

 stats   enable
 stats   uri /haproxy/stats
 stats   refresh 5
 stats   auth my:secret

 listen stats
 bind 0.0.0.0:90
 mode http
 stats enable
 stats uri /
 stats refresh 5

 frontend api
   bind *:80
   bind *:81 accept-proxy

   option httpclose
   option forwardfor
   http-request add-header X-Request-Timestamp %Ts.%ms
   unique-id-format %{+X}o%pid-%rt
   unique-id-header X-Request-Id
   rspadd X-Api-Host:\ i-a22932d9

   reqrep ^([^\ ]*)\ ([^\?\ ]*)(\?[^\ ]*)?\ HTTP.*  \0\r\nX-API-URL:\ \2


   acl is_1_1 path_dir /1/my/path
   use_backend 1_1 if is_1_1

   acl is_1_2 path_dir /1/my/other_path
   use_backend 1_2 if is_1_2

   ...

   default_backend bad_url

   log-format %ci:%cp\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\
 %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %U/%B\ %ST\ %tsc\ %ID\ +\ %r

 backend bad_url
   block if TRUE
   errorfile 403 /etc/haproxy/bad_url.http



Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-13 Thread Godbach

On 2013/8/13 22:54, Willy Tarreau wrote:

Hi guys,

On Thu, Aug 08, 2013 at 08:48:53PM +0800, Godbach wrote:

On 2013/8/8 18:50, Mark Brooks wrote:

The issue I am seeing is that, using the dev version of HAProxy
(1.5-dev19, git commit id 00f0084752eab236af80e61291d672e835790cff),
I have a source IP stick table and I'm trying to drop specific entries
from it, but it's resulting in the whole table being dropped each time.


(...)

I got it, it's a stupid fix for a previous bug that was killing a bit
too much this time.

Here's the fix.

Best regards,
Willy



Hi Willy,

I have done a new test with this patch, it works well now.

Yes, the fix applies the same test used for the last node to the other 
nodes to be removed. I had tried to fix it just by testing skip_entry, 
but forgot to also test si->applet.ctx.table.entry->ref_cnt. The right 
condition could have been found by looking at the last-node case.


There is another point I want to confirm: there are nodes deleted even 
in the 'show table' action if they are expired, as below:


eb = ebmb_next(&si->applet.ctx.table.entry->key);
if (eb) {
	struct stksess *old = si->applet.ctx.table.entry;
	si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);

	if (show)
		stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
	else if (!skip_entry && !si->applet.ctx.table.entry->ref_cnt)
		stksess_kill(&si->applet.ctx.table.proxy->table, old);
	si->applet.ctx.table.entry->ref_cnt++;
	break;
}

If the expired nodes were not removed here, they would still be removed 
by the expiration task calling process_table_expire(). So removing 
expired nodes in the 'show table' action makes process_table_expire() 
do less work.


--
Best Regards,
Godbach



Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-13 Thread Willy Tarreau
Hi Godbach,

On Wed, Aug 14, 2013 at 10:20:10AM +0800, Godbach wrote:
 I have done a new test with this patch, it works well now.

Thanks for testing.

 Yes, the fix applies the same test used for the last node to the other 
 nodes to be removed. I had tried to fix it just by testing skip_entry, 
 but forgot to also test si->applet.ctx.table.entry->ref_cnt. The right 
 condition could have been found by looking at the last-node case.

It was not obvious for me either, I found it only by single stepping in gdb !

 There is another point I want to confirm: there are nodes deleted even 
 in the 'show table' action if they are expired, as below:
 
 eb = ebmb_next(&si->applet.ctx.table.entry->key);
 if (eb) {
 	struct stksess *old = si->applet.ctx.table.entry;
 	si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);
 
 	if (show)
 		stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
 	else if (!skip_entry && !si->applet.ctx.table.entry->ref_cnt)
 		stksess_kill(&si->applet.ctx.table.proxy->table, old);
 	si->applet.ctx.table.entry->ref_cnt++;
 	break;
 }
 
 If the expired nodes were not removed here, they would still be removed 
 by the expiration task calling process_table_expire(). So removing 
 expired nodes in the 'show table' action makes process_table_expire() 
 do less work.

I've seen this as well and had a hard time remembering why it was done
this way. I was sure it was needed but the cause was not obvious to me.
IIRC, the reason was that we want 'show table' to report valid entry
counts, so if we don't kill the entries ourselves and there is low
activity, nothing else will kill them fast enough to keep the counts
valid. And as you say, it also reduces the work of process_table_expire(),
even though this is a very minor benefit since we're not supposed to
be dumping the stats all day long :-)

Best regards,
Willy