Re: Help! HAProxy randomly failing health checks!
Has ELB changed its IP address??? Maybe you're checking a third party VM :)

Baptiste
Re: Help! HAProxy randomly failing health checks!
On Wed, Mar 16, 2016 at 5:54 AM, Zachary Punches wrote:
> Hello!
>
> My name is Zack, and I have been in the middle of an ongoing HAProxy
> issue that has me scratching my head.
>
> Here is the setup:
>
> Our setup is hosted by Amazon, and our HAProxy (1.6.3) boxes are in each
> region in 3 regions. We have 2 HAProxy boxes per region for a total of 6
> proxy boxes.
>
> These boxes are routed traffic through Route 53. Their entire job is
> to forward data from one of our clients to our database backend. It handles
> this absolutely fine, except between the hours of 7pm PST and 7am PST.
> During these hours, our Route 53 health checks time out, thus causing the
> traffic to switch to the other HAProxy box inside of the same region.
>
> During the other 12 hours of the day, we receive 0 alerts from our health
> checks.
>
> I have noticed that we get a series of SSL handshake failures (though this
> happens throughout the entire day) that causes the server to hang for a
> second, thus causing the health checks to fail. During the day our SSL
> failures do not cause the server to hang long enough to fail the checks;
> they only fail at night. I have attached my HAProxy config hoping that you
> guys have an answer for me. Let me know if you need any more info.
>
> I have done a few tcpdump captures during the SSL handshake failures (not
> at night during it failing, but during the day when it still gets the SSL
> handshake failures but doesn't fail the health check) and it seems there
> is a disconnect and a reconnect during the handshake.
>
> Here is my config. I will be running a tcpdump tonight to capture the
> packets during the failure and will attach it if you guys need more info.
>
> #-
> # Example configuration for a possible web application. See the
> # full configuration options online.
> #
> # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
> #
> #-
>
> #-
> # Global settings
> #-
> global
>     log 127.0.0.1 local2
>
>     pidfile /var/run/haproxy.pid
>     maxconn 3
>     user haproxy
>     group haproxy
>     daemon
>     ssl-default-bind-options no-sslv3 no-tls-tickets
>     tune.ssl.default-dh-param 2048
>
>     # turn on stats unix socket
>     #stats socket /var/lib/haproxy/stats
>
> #-
> # common defaults that all the 'listen' and 'backend' sections will
> # use if not designated in their block
> #-
> defaults
>     mode http
>     log global
>     option httplog
>     retries 3
>     timeout http-request 5s
>     timeout queue 1m
>     timeout connect 31s
>     timeout client 31s
>     timeout server 31s
>     maxconn 15000
>
>     # Stats
>     stats enable
>     stats uri /haproxy?stats
>     stats realm Strictly\ Private
>     stats auth $StatsUser:$StatsPass
>
> #-
> # main frontend which proxies to the backends
> #-
>
> frontend shared_incoming
>     maxconn 15000
>     timeout http-request 5s
>
>     # Bind ports of incoming traffic
>     bind *:1025 accept-proxy # http
>     bind *:1026 accept-proxy ssl crt /path/to/default/ssl/cert.pem ssl crt /path/to/cert/folder/ # https
>     bind *:1027 # Health checking port
>
>     acl gs_texthtml url_reg \/gstext\.html               ## allow gs to do meta tag verification
>     acl gs_user_agent hdr_sub(User-Agent) -i globalsign  ## allow gs to do meta tag verification
>
>     # Add headers
>     http-request set-header $Proxy-Header-Ip %[src]
>     http-request set-header $Proxy-Header-Proto http if !{ ssl_fc }
>     http-request set-header $Proxy-Header-Proto https if { ssl_fc }
>
>     # Route traffic based on domain
>     use_backend gs_verify if gs_texthtml or gs_user_agent  ## allow gs meta tag verification
>     use_backend %[req.hdr(host),lower,map_dom(/path/to/map/file.map,unknown_domain)]
>
>     # Drop unrecognized traffic
>     default_backend unknown_domain
>
> #-
> # Backends
> #-
>
> backend
Re: Help! HAProxy randomly failing health checks!
Greetings,

On 03/15/2016 02:54 PM, Zachary Punches wrote:
> Hello!
>
> My name is Zack, and I have been in the middle of an ongoing HAProxy
> issue that has me scratching my head.
>
> Here is the setup:
>
> Our setup is hosted by Amazon, and our HAProxy (1.6.3) boxes are in each
> region in 3 regions. We have 2 HAProxy boxes per region for a total of 6
> proxy boxes.
>
> These boxes are routed traffic through Route 53. Their entire job is
> to forward data from one of our clients to our database backend. It handles
> this absolutely fine, except between the hours of 7pm PST and 7am PST.
> During these hours, our Route 53 health checks time out, thus causing the
> traffic to switch to the other HAProxy box inside of the same region.
>
> During the other 12 hours of the day, we receive 0 alerts from our health
> checks.
>
> I have noticed that we get a series of SSL handshake failures (though this
> happens throughout the entire day) that causes the server to hang for a
> second, thus causing the health checks to fail. During the day our SSL
> failures do not cause the server to hang long enough to fail the checks;
> they only fail at night. I have attached my HAProxy config hoping that you
> guys have an answer for me. Let me know if you need any more info.

Before thinking about less obvious potential causes, the CPU of the instance isn't close to getting capped out during the time in question? Also, are the connection counts under 15,000 (otherwise I could see it ending up with a timeout and trying again)?

- Chad

> I have done a few tcpdump captures during the SSL handshake failures (not
> at night during it failing, but during the day when it still gets the SSL
> handshake failures but doesn't fail the health check) and it seems there
> is a disconnect and a reconnect during the handshake.
>
> Here is my config. I will be running a tcpdump tonight to capture the
> packets during the failure and will attach it if you guys need more info.
>
> #-
> # Example configuration for a possible web application. See the
> # full configuration options online.
> #
> # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
> #
> #-
>
> #-
> # Global settings
> #-
> global
>     log 127.0.0.1 local2
>
>     pidfile /var/run/haproxy.pid
>     maxconn 3
>     user haproxy
>     group haproxy
>     daemon
>     ssl-default-bind-options no-sslv3 no-tls-tickets
>     tune.ssl.default-dh-param 2048
>
>     # turn on stats unix socket
>     #stats socket /var/lib/haproxy/stats
>
> #-
> # common defaults that all the 'listen' and 'backend' sections will
> # use if not designated in their block
> #-
> defaults
>     mode http
>     log global
>     option httplog
>     retries 3
>     timeout http-request 5s
>     timeout queue 1m
>     timeout connect 31s
>     timeout client 31s
>     timeout server 31s
>     maxconn 15000
>
>     # Stats
>     stats enable
>     stats uri /haproxy?stats
>     stats realm Strictly\ Private
>     stats auth $StatsUser:$StatsPass
>
> #-
> # main frontend which proxies to the backends
> #-
>
> frontend shared_incoming
>     maxconn 15000
>     timeout http-request 5s
>
>     # Bind ports of incoming traffic
>     bind *:1025 accept-proxy # http
>     bind *:1026 accept-proxy ssl crt /path/to/default/ssl/cert.pem ssl crt /path/to/cert/folder/ # https
>     bind *:1027 # Health checking port
>
>     acl gs_texthtml url_reg \/gstext\.html               ## allow gs to do meta tag verification
>     acl gs_user_agent hdr_sub(User-Agent) -i globalsign  ## allow gs to do meta tag verification
>
>     # Add headers
>     http-request set-header $Proxy-Header-Ip %[src]
>     http-request set-header $Proxy-Header-Proto http if !{ ssl_fc }
>     http-request set-header $Proxy-Header-Proto https if { ssl_fc }
>
>     # Route traffic based on domain
>     use_backend gs_verify if gs_texthtml or gs_user_agent  ## allow gs meta tag verification
>     use_backend %[req.hdr(host),lower,map_dom(/path/to/map/file.map,unknown_domain)]
>
>     # Drop unrecognized traffic
>     default_backend unknown_domain
>
> #-
> # Backends
> #-
>
> backend server0
>     ## added to allow gs ssl meta tag verification
>     reqrep ^GET\ /.*\ (HTTP/.*) GET\ /GlobalSignVerification\ \1
>     server server0_http
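As an aside on the health-check port: the `bind *:1027` line above just exposes the normal proxy path to Route 53. HAProxy can instead answer probes explicitly with `monitor-uri`, and deliberately fail them when the backend is really down. A minimal sketch, not from Zack's config (the backend name `my_backend`, the URI and the threshold are placeholder assumptions):

```
frontend health
    bind *:1027
    mode http
    # Answer the probe directly, without involving any backend
    monitor-uri /haproxy-health
    # Report failure when no server in the backend is up, so Route 53
    # only fails over when this proxy genuinely cannot serve traffic
    acl no_servers nbsrv(my_backend) lt 1
    monitor fail if no_servers
```

This also decouples the probe from SSL handshake behaviour on the client-facing ports, which may help isolate whether the night-time failures are really frontend stalls.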
Re: Asking for help: how to expire haproxy's stick table entry only after the closing of all sessions which used it
On 16/03/2016 12:27 AM, "Hugo Maia" wrote:
>
> Hi, my name is Hugo.
>
> I'm currently using HAProxy 1.5, and I have a backend with 2 servers. My app servers receive connections from two clients, and I want both of them to be attributed to the same server. All connections have a URL parameter X, and sessions that should be attributed to the same server have the same URL parameter X value. I use a stick table to save the server that a particular URL parameter value uses, so that future connections can be attributed to the same server.
>
> I want to be able to add app servers as load increases. In order to instruct HAProxy to move previous connections to the new app server, I need to expire stick table entries when no session (of either client) is active on the server.

Isn't that what a least-connection balancer is for? I use that mode, so when adding a new backend server all new sessions will go to that one, including existing ones when they expire.

> Can you help me with this?
>
> Thanks in advance for any kind of help.
>
> Best Regards,
>
> Hugo Maia
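For reference, the stickiness half of Hugo's setup is usually expressed with a stick table keyed on the URL parameter. A rough sketch against 1.5 (backend/server names, table sizing and the 30m expiry are illustrative, not Hugo's actual config); note that `expire` is an idle timeout on the table entry, not "expire when the last session closes", which is precisely the limitation being asked about:

```
backend app
    balance leastconn
    # one entry per value of URL parameter "X"; an entry is refreshed on
    # each matching request and dropped 30 minutes after its last use
    stick-table type string len 64 size 200k expire 30m
    stick on urlp(X)
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
```

Entries can also be removed by hand over the stats socket (`clear table app key <value>`) once external monitoring decides no sessions remain, which is one way to approximate the behaviour Hugo wants.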
[PATCH v2] BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
From 65198ff81545bd146511511eda534c699cb100b7 Mon Sep 17 00:00:00 2001
From: Benoit GARNIER
Date: Sun, 27 Mar 2016 03:04:16 +0200
Subject: [PATCH] BUG/MINOR: log: Don't use strftime() which can clobber
 timezone if chrooted

The strftime() function can call tzset() internally on some platforms.
When haproxy is chrooted, the /etc/localtime file is not found, and some
implementations will clobber the content of the current timezone.

The GMT offset is computed by diffing the times returned by gmtime_r()
and localtime_r(). These variants are guaranteed to not call tzset()
and were already used in haproxy while chrooted, so they should be safe.

This patch must be backported to 1.6 and 1.5.
---
 include/common/standard.h |  6 ++--
 src/log.c                 |  4 +--
 src/standard.c            | 76 ++-
 3 files changed, 62 insertions(+), 24 deletions(-)

diff --git a/include/common/standard.h b/include/common/standard.h
index 353d0b0..cd2208c 100644
--- a/include/common/standard.h
+++ b/include/common/standard.h
@@ -871,10 +871,11 @@ extern const char *monthname[];
 char *date2str_log(char *dest, struct tm *tm, struct timeval *date, size_t size);
 
 /* Return the GMT offset for a specific local time.
+ * Both t and tm must represent the same time.
  * The string returned has the same format as returned by strftime(... "%z", tm).
  * Offsets are kept in an internal cache for better performances.
  */
-const char *get_gmt_offset(struct tm *tm);
+const char *get_gmt_offset(time_t t, struct tm *tm);
 
 /* gmt2str_log: write a date in the format :
  * "%02d/%s/%04d:%02d:%02d:%02d +0000" without using snprintf
@@ -885,10 +886,11 @@ char *gmt2str_log(char *dst, struct tm *tm, size_t size);
 
 /* localdate2str_log: write a date in the format :
  * "%02d/%s/%04d:%02d:%02d:%02d +0000(local timezone)" without using snprintf
+ * Both t and tm must represent the same time.
  * return a pointer to the last char written (\0) or
  * NULL if there isn't enough space.
  */
-char *localdate2str_log(char *dst, struct tm *tm, size_t size);
+char *localdate2str_log(char *dst, time_t t, struct tm *tm, size_t size);
 
 /* These 3 functions parses date string and fills the
  * corresponding broken-down time in <tm>. In succes case,
diff --git a/src/log.c b/src/log.c
index ab38353..4d496cd 100644
--- a/src/log.c
+++ b/src/log.c
@@ -979,7 +979,7 @@ static char *update_log_hdr_rfc5424(const time_t time)
 		tvsec = time;
 		get_localtime(tvsec, &tm);
-		gmt_offset = get_gmt_offset(&tm);
+		gmt_offset = get_gmt_offset(time, &tm);
 		hdr_len = snprintf(logheader_rfc5424, global.max_syslog_len,
 		                   "<%d>1 %4d-%02d-%02dT%02d:%02d:%02d%.3s:%.2s %s ",
@@ -1495,7 +1495,7 @@ int build_logline(struct stream *s, char *dst, size_t maxsize, struct list *list
 			case LOG_FMT_DATELOCAL: // %Tl
 				get_localtime(s->logs.accept_date.tv_sec, &tm);
-				ret = localdate2str_log(tmplog, &tm, dst + maxsize - tmplog);
+				ret = localdate2str_log(tmplog, s->logs.accept_date.tv_sec, &tm, dst + maxsize - tmplog);
 				if (ret == NULL)
 					goto out;
 				tmplog = ret;
diff --git a/src/standard.c b/src/standard.c
index e08795f..2fe92ba 100644
--- a/src/standard.c
+++ b/src/standard.c
@@ -2552,31 +2552,66 @@ char *date2str_log(char *dst, struct tm *tm, struct timeval *date, size_t size)
 	return dst;
 }
 
+/* Base year used to compute leap years */
+#define TM_YEAR_BASE 1900
+
+/* Return the difference in seconds between two times (leap seconds are ignored).
+ * Retrieved from glibc 2.18 source code.
+ */
+static int my_tm_diff(const struct tm *a, const struct tm *b)
+{
+	/* Compute intervening leap days correctly even if year is negative.
+	 * Take care to avoid int overflow in leap day calculations,
+	 * but it's OK to assume that A and B are close to each other.
+	 */
+	int a4 = (a->tm_year >> 2) + (TM_YEAR_BASE >> 2) - ! (a->tm_year & 3);
+	int b4 = (b->tm_year >> 2) + (TM_YEAR_BASE >> 2) - ! (b->tm_year & 3);
+	int a100 = a4 / 25 - (a4 % 25 < 0);
+	int b100 = b4 / 25 - (b4 % 25 < 0);
+	int a400 = a100 >> 2;
+	int b400 = b100 >> 2;
+	int intervening_leap_days = (a4 - b4) - (a100 - b100) + (a400 - b400);
+	int years = a->tm_year - b->tm_year;
+	int days = (365 * years + intervening_leap_days
+		    + (a->tm_yday - b->tm_yday));
+	return (60 * (60 * (24 * days + (a->tm_hour - b->tm_hour))
+		      + (a->tm_min - b->tm_min))
+		+ (a->tm_sec - b->tm_sec));
+}
+
 /* Return the GMT offset for a specific local time.
+ * Both t and tm must represent the same time.
  * The string returned has the same
Re: [PATCH] BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
On Tue, Mar 15, 2016 at 10:12:51PM +0100, Benoît GARNIER wrote:
> Le 15/03/2016 21:59, Willy Tarreau a écrit :
> > Nice to see that you never resign, you pulled out the machine gun :-)
> > I'll trust you as I guess you've run a number of tests. The glibc being
> > covered by LGPL, it should be fine in theory, except if you picked it
> > from an LGPLv3 version which is not compatible with GPLv2 (it requires
> > to upgrade to GPLv3). But since I'm seeing this code in glibc 2.18 which
> > is still LGPLv2.1, that's fine. You should mention the glibc version you
> > used to clear any doubt.
>
> I'll redo the patch with the glibc 2.18 code, but I'll need to redo all
> my tests if there are any differences in the aforementioned code.

Just diff it, visually it was exactly the same.

> >> + sprintf(gmt_offset+1, "%02d%02d", (diff/60)%100, diff%60);
> > Please use snprintf() instead. We completely got rid of sprintf() as it
> > emits warnings on some platforms for being notoriously insecure and misused.
>
> I was extra careful to be sure to not overwrite the receiving buffer
> (thus the modulus and the sign handling), but I didn't think about the
> warnings.

The only sprintf() we used to have were all pretty safe. Also, improper use of snprintf() can cause the same damage as sprintf(), but at least it limits the risks, especially for quick changes performed later. So we declared that the warnings were easy to get rid of without adding any cost, and that marked the end of sprintf().

Willy
Re: PATCH 1/1: OPTIM: args
On Tue, Mar 15, 2016 at 07:09:49PM +, David CARLIER wrote:
> Hi and thanks.
> Attached the patch with the related changes.

Thanks, applied. I found that the macros were still using int64_t so I turned all int64_t to uint64_t to avoid any trouble, especially with the retrieval of the last argument which could possibly hit the sign bit and cause a negative type to be returned (I haven't counted but in theory that's possible).

Thanks David,
Willy
Re: [PATCH] BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
Le 15/03/2016 21:59, Willy Tarreau a écrit :
> Nice to see that you never resign, you pulled out the machine gun :-)
> I'll trust you as I guess you've run a number of tests. The glibc being
> covered by LGPL, it should be fine in theory, except if you picked it
> from an LGPLv3 version which is not compatible with GPLv2 (it requires
> to upgrade to GPLv3). But since I'm seeing this code in glibc 2.18 which
> is still LGPLv2.1, that's fine. You should mention the glibc version you
> used to clear any doubt.

I'll redo the patch with the glibc 2.18 code, but I'll need to redo all my tests if there are any differences in the aforementioned code.

> But I have two small requests below :
>
> > +static int tm_diff(const struct tm *a, const struct tm *b)
>
> Please don't use the same name as glibc's and in general avoid too generic
> names especially when their name suggests they apply to standard types;
> over time we've been used to see name clashes on various systems and/or
> libs. The simplest way is to prefix them with "my_" so that we know it's a
> local implementation. That's how we have my_strndup() and a few others.

Fair enough, I'll change the name.

> > + sprintf(gmt_offset+1, "%02d%02d", (diff/60)%100, diff%60);
>
> Please use snprintf() instead. We completely got rid of sprintf() as it
> emits warnings on some platforms for being notoriously insecure and misused.

I was extra careful to be sure to not overwrite the receiving buffer (thus the modulus and the sign handling), but I didn't think about the warnings. I'll amend the patch to use snprintf().

> Thanks!
> Willy

Benoit GARNIER
Re: There is kind of a spam issue on this ML no?
On Mon, Mar 14, 2016 at 09:26:21AM +0100, Baptiste wrote:
> > We as users have the opportunity choose to leave this list. It's not an
> > ideal solution but there isn't at the moment
> > a good alternative to haproxy. So in my case I haven't to like, but I have
> > to live with it.
>
> Do you mean you could switch to an other product because of an amount
> of spam on a mailling list purposely widely opened to everyone?

Yes, that's the part I loved the most in this thread :-)

I have to go before shops close now, I must change my laptop because it shows me more spams than what I used to have on my previous one, so there must be a relation :-)

Willy
Re: [PATCH] BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
Hi Benoit,

On Tue, Mar 15, 2016 at 09:21:17PM +0100, Benoît GARNIER wrote:
> +/* Return the difference in seconds between two times (leap seconds are ignored).
> + * Taken from glibc source code.
> + */

Nice to see that you never resign, you pulled out the machine gun :-)

I'll trust you as I guess you've run a number of tests. The glibc being covered by LGPL, it should be fine in theory, except if you picked it from an LGPLv3 version which is not compatible with GPLv2 (it requires to upgrade to GPLv3). But since I'm seeing this code in glibc 2.18 which is still LGPLv2.1, that's fine. You should mention the glibc version you used to clear any doubt.

But I have two small requests below :

> +static int tm_diff(const struct tm *a, const struct tm *b)

Please don't use the same name as glibc's and in general avoid too generic names especially when their name suggests they apply to standard types; over time we've been used to see name clashes on various systems and/or libs. The simplest way is to prefix them with "my_" so that we know it's a local implementation. That's how we have my_strndup() and a few others.

> + sprintf(gmt_offset+1, "%02d%02d", (diff/60)%100, diff%60);

Please use snprintf() instead. We completely got rid of sprintf() as it emits warnings on some platforms for being notoriously insecure and misused.

Thanks!
Willy
[ANNOUNCE] haproxy-1.3.28 (EOL)
Hi,

HAProxy 1.3.28 was released on 2016/03/14. It added 15 new commits after version 1.3.27, which was released more than one year ago.

Please note that this is the very last 1.3 release and that 1.3 is now marked end-of-life, almost 10 years after its first release. There's nothing really interesting in this version, but since fixes were available it was better to issue a release so that users can benefit from them.

There is no more reason for staying on 1.3 now; bugs which will be discovered in the future will not be fixed. If you need a rock-solid version and don't need SSL, 1.4 represents a seamless upgrade. If you need some time to validate a major upgrade, better spot 1.5 (1.6 still requires frequent updates).

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Sources          : http://www.haproxy.org/download/1.3/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.3.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.3.git
   Changelog        : http://www.haproxy.org/download/1.3/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy

---
Complete changelog :
- DOC: usesrc root privileges requirements
- BUG/MINOR: http: remove stupid HTTP_METH_NONE entry
- BUG/MINOR: http: Add OPTIONS in supported http methods (found by find_http_meth)
- CLEANUP: config: make the errorloc/errorfile messages less confusing
- DOC: Address issue where documentation is excluded due to a gitignore rule.
- CLEANUP: .gitignore: ignore more test files
- CLEANUP: .gitignore: finally ignore everything but what is known.
- CLEANUP: don't ignore debian/ directory if present
- FIX: small typo in an example using the "Referer" header
- BUG/MEDIUM: config: count memory limits on 64 bits, not 32
- BUG/MINOR: acl: don't use record layer in req_ssl_ver
- BUILD: freebsd: double declaration
- DOC: add server name at rate-limit sessions example
- MINOR: cfgparse: warn when uid parameter is not a number
- MINOR: cfgparse: warn when gid parameter is not a number
---
[ANNOUNCE] haproxy-1.4.27
Hi,

HAProxy 1.4.27 was released on 2016/03/14. It added 29 new commits after version 1.4.26, which was released more than one year ago.

This version mainly fixes a bug causing the process to crash when http-send-name-header is used if a number of conditions are met. The other visible change is that some protocol security checks have been backported to closely match the HTTP specification and limit the risk that haproxy passes mangled requests or responses that may affect devices vulnerable to smuggling attacks. The rest is pretty minor.

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Sources          : http://www.haproxy.org/download/1.4/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.4.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.4.git
   Changelog        : http://www.haproxy.org/download/1.4/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy

---
Complete changelog :
- DOC: Fix L4TOUT typo in documentation
- BUG/MEDIUM: http: remove content-length from chunked messages
- DOC: http: update the comments about the rules for determining transfer-length
- BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to HTTP/1.1
- BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad request
- BUG/MEDIUM: http: remove content-length form responses with bad transfer-encoding
- MEDIUM: http: restrict the HTTP version token to 1 digit as per RFC7230
- BUG/MINOR: cfgparse: fix typo in 'option httplog' error message
- DOC: usesrc root privileges requirements
- DOC: typo in 'redirect', 302 code meaning
- BUG/MINOR: http: remove stupid HTTP_METH_NONE entry
- BUG/MAJOR: http: don't call http_send_name_header() after an error
- CLEANUP: config: make the errorloc/errorfile messages less confusing
- BUG/MINOR: config: check that tune.bufsize is always positive
- BUG/MINOR: http: Add OPTIONS in supported http methods (found by find_http_meth)
- DOC: Address issue where documentation is excluded due to a gitignore rule.
- CLEANUP: .gitignore: ignore more test files
- CLEANUP: .gitignore: finally ignore everything but what is known.
- CLEANUP: don't ignore debian/ directory if present
- FIX: small typo in an example using the "Referer" header
- BUG/MEDIUM: config: count memory limits on 64 bits, not 32
- BUG/MINOR: acl: don't use record layer in req_ssl_ver
- BUG/MEDIUM: http: switch the request channel to no-delay once done.
- BUILD: freebsd: double declaration
- BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
- BUG/MEDIUM: config: Adding validation to stick-table expire value.
- DOC: add server name at rate-limit sessions example
- MINOR: cfgparse: warn when uid parameter is not a number
- MINOR: cfgparse: warn when gid parameter is not a number
---
[ANNOUNCE] haproxy-1.5.16
Hi,

HAProxy 1.5.16 was released on 2016/03/14. It added 47 new commits after version 1.5.15.

There's nothing really outstanding here. The main visible fix probably is the one for a bug occasionally causing some missed timeout events on a connection with a pending close. This results in a busy poll loop until the timeout expires again. A lot of detailed information was provided by BaiYang, which was critical in helping to understand the problem. A related issue is that some keep-alive requests can face a shutdown if the server closed during the idle timeout. Browsers normally don't notice this, but download tools or web services do. The remaining bugs are not very important, please see the changelog.

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Sources          : http://www.haproxy.org/download/1.5/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.5.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.5.git
   Changelog        : http://www.haproxy.org/download/1.5/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy

---
Complete changelog :
- BUG/BUILD: replace haproxy-systemd-wrapper with $(EXTRA) in install-bin.
- BUG/MINOR: acl: don't use record layer in req_ssl_ver
- BUG: http: do not abort keep-alive connections on server timeout
- BUG/MEDIUM: http: switch the request channel to no-delay once done.
- MINOR: config: extend the default max hostname length to 64 and beyond
- BUG/MEDIUM: http: don't enable auto-close on the response side
- BUG/MEDIUM: stream: fix half-closed timeout handling
- BUG/MEDIUM: cli: changing compression rate-limiting must require admin level
- BUILD: freebsd: double declaration
- BUG/MEDIUM: sample: urlp can't match an empty value
- BUG/MEDIUM: peers: table entries learned from a remote are pushed to others after a random delay.
- BUG/MEDIUM: peers: old stick table updates could be repushed.
- CLEANUP: haproxy: using _GNU_SOURCE instead of __USE_GNU macro.
- BUG/MINOR: chunk: make chunk_dup() always check and set dst->size
- MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero
- MINOR: chunks: add chunk_strcat() and chunk_newstr()
- MINOR: chunk: make chunk_initstr() take a const string
- BUG/MEDIUM: config: Adding validation to stick-table expire value.
- BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
- BUG/MEDIUM: channel: fix miscalculation of available buffer space.
- BUG/MINOR: stream: don't force retries if the server is DOWN
- MINOR: unix: don't mention free ports on EAGAIN
- BUG/CLEANUP: CLI: report the proper field states in "show sess"
- MINOR: stats: send content-length with the redirect to allow keep-alive
- BUG: stream_interface: Reuse connection even if the output channel is empty
- DOC: remove old tunnel mode assumptions
- DOC: add server name at rate-limit sessions example
- BUG/MEDIUM: ssl: fix off-by-one in ALPN list allocation
- BUG/MEDIUM: ssl: fix off-by-one in NPN list allocation
- BUG/MEDIUM: stats: stats bind-process doesn't propagate the process mask correctly
- BUG/MINOR: http: Be sure to process all the data received from a server
- BUG/MEDIUM: chunks: always reject negative-length chunks
- BUG/MINOR: systemd: ensure we don't miss signals
- BUG/MINOR: systemd: report the correct signal in debug message output
- BUG/MINOR: systemd: propagate the correct signal to haproxy
- MINOR: systemd: ensure a reload doesn't mask a stop
- CLEANUP: stats: Avoid computation with uninitialized bits.
- CLEANUP: pattern: Ignore unknown samples in pat_match_ip().
- CLEANUP: map: Avoid memory leak in out-of-memory condition.
- BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and last rule is a CONNECT with no port
- BUG/MINOR: tcpcheck: fix incorrect list usage resulting in failure to load certain configs
- MINOR: cfgparse: warn when uid parameter is not a number
- MINOR: cfgparse: warn when gid parameter is not a number
- BUG/MINOR: standard: Avoid free of non-allocated pointer
- BUG/MINOR: pattern: Avoid memory leak on out-of-memory condition
- CLEANUP: http: fix a build warning introduced by a recent fix
- BUG/MINOR: log: GMT offset not updated when entering/leaving DST
---
[ANNOUNCE] haproxy-1.6.4
Hi, HAProxy 1.6.4 was released on 2016/03/14. It added 73 new commits after version 1.6.3. First, a handful of severe bugs were fixed and some of them also affect older versions. One could cause a server's port to be overwritten upon reload when both the DNS and server-state-file are used. Another one could cause some connections to freeze and remain orphaned when an idle timeout stroke during http-reuse. It was more visible with maxconn. This bug was reported by Yves Lafon who provided lots of useful dumps and tried debugging code to help figure what was happening, so many thanks for this! Thierry fixed a major Lua bug preventing an applet from being woken up. Baptiste found that using L7 sample fetches in "tcp-request connection" rulesets would cause a nice segfault. It was even broader than that, any L5-7 fetch there can cause this, except the few that still had the double check. Finally, the variables could also crash the process if improperly used (eg: using session variables at the connection level). Last, I remember there were several people reporting the systemd-wrapper was leaving some processes behind. I found a few race conditions that I fixed. I hope it will be enough. The remaining bugs are numerous but not critical, see the changelog. While not a bug, the default mailer timeout change was backported from 1.7 to ensure more reliable operation (changed to 10 seconds), as well as the change disabling the connection retry on a server already marked down, and the content-length header that's now added in the stats response to avoid a risk of truncated responses and permit client-side keep-alive. The other changes are just here to support the respective fixes. 
Please find the usual URLs below : Site index : http://www.haproxy.org/ Discourse: http://discourse.haproxy.org/ Sources : http://www.haproxy.org/download/1.6/src/ Git repository : http://git.haproxy.org/git/haproxy-1.6.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy-1.6.git Changelog: http://www.haproxy.org/download/1.6/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/ Willy --- Complete changelog : - BUG/MINOR: http: fix several off-by-one errors in the url_param parser - BUG/MINOR: http: Be sure to process all the data received from a server - BUG/MINOR: chunk: make chunk_dup() always check and set dst->size - MINOR: chunks: ensure that chunk_strcpy() adds a trailing zero - MINOR: chunks: add chunk_strcat() and chunk_newstr() - MINOR: chunk: make chunk_initstr() take a const string - MINOR: lru: new function to delete least recently used keys - DOC: add Ben Shillito as the maintainer of 51d - BUG/MINOR: 51d: Ensures a unique domain for each configuration - BUG/MINOR: 51d: Aligns Pattern cache implementation with HAProxy best practices. - BUG/MINOR: 51d: Releases workset back to pool. - BUG/MINOR: 51d: Aligned const pointers to changes in 51Degrees. - CLEANUP: 51d: Aligned if statements with HAProxy best practices and removed casts from malloc. - DOC: fix a few spelling mistakes (cherry picked from commit cc123c66c2075add8524a6a9925382927daa6ab0) - DOC: fix "workaround" spelling - BUG/MINOR: examples: Fixing haproxy.spec to remove references to .cfg files - MINOR: fix the return type for dns_response_get_query_id() function - MINOR: server state: missing LF (\n) on error message printed when parsing server state file - BUG/MEDIUM: dns: no DNS resolution happens if no ports provided to the nameserver - BUG/MAJOR: servers state: server port is erased when dns resolution is enabled on a server - BUG/MEDIUM: servers state: server port is used uninitialized - BUG/MEDIUM: config: Adding validation to stick-table expire value. 
- BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week
- BUG/MEDIUM: channel: fix miscalculation of available buffer space.
- MEDIUM: pools: add a new flag to avoid rounding pool size up
- BUG/MEDIUM: buffers: do not round up buffer size during allocation
- BUG/MINOR: stream: don't force retries if the server is DOWN
- BUG/MINOR: counters: make the sc-inc-gpc0 and sc-set-gpt0 touch the table
- MINOR: unix: don't mention free ports on EAGAIN
- BUG/CLEANUP: CLI: report the proper field states in "show sess"
- MINOR: stats: send content-length with the redirect to allow keep-alive
- BUG: stream_interface: Reuse connection even if the output channel is empty
- DOC: remove old tunnel mode assumptions
- BUG/MAJOR: http-reuse: fix risk of orphaned connections
- BUG/MEDIUM: http-reuse: do not share private connections across backends
- BUG/MINOR: ssl: Be sure to use unique serial for regenerated certificates
- BUG/MINOR: stats: fix missing comma in stats on agent drain
- BUG/MINOR: lua: unsafe initialization
- DOC: lua: fix somme errors
- DOC: add server name at rate-limit sessions example
- BUG/MEDIUM: ssl: fix off-by-one in ALPN
[ANNOUNCE] haproxy-1.7-dev2
Hi,

HAProxy 1.7-dev2 was released on 2016/03/14. It added 172 new commits after version 1.7-dev1.

There is quite a lot of news in this version, which is expected after almost 3 months. This time all maintained versions were released (down to 1.3), so don't be surprised that I'm copy-pasting some parts of the relevant changes into each announce e-mail.

First, a handful of severe bugs were fixed, and some of them also affect older versions. One could cause a server's port to be overwritten upon reload when both DNS resolution and the server-state file are used. Another one could cause some connections to freeze and remain orphaned when an idle timeout struck during http-reuse; it was more visible with maxconn. This bug was reported by Yves Lafon, who provided lots of useful dumps and tried debugging code to help figure out what was happening, so many thanks for this!

Thierry fixed two major Lua bugs, one causing a segfault and another one preventing an applet from being woken up. Baptiste found that using L7 sample fetches in "tcp-request connection" rulesets would cause a nice segfault. It was even broader than that: any L5-L7 fetch there can cause this, except the few that still had the double check. 1.6 is also affected, so the lack of prior reports simply means that people now write clean configurations and get rid of warnings! Finally, variables could also crash the process if improperly used (eg: using session variables at the connection level).

Last, I remember there were several people reporting that the systemd-wrapper was leaving some processes behind. I found a few race conditions that I fixed; I hope it will be enough. The remaining bugs are numerous but not critical, see the changelog.

The most visible change to developers is the introduction of "filters" by Christopher Faulet. The principle is to place some hooks around the existing analysers, and at a few extra places, to call various content processing functions.
This adds a lot of flexibility to the processing, and the first immediate gain is the move of the compression code out of the HTTP forwarding engine, which made it possible to factor out the request and response parsers (which were almost identical except for the compression). It also improved the compression performance a little bit. Another short-term benefit I'm seeing is that the traffic shaping I've been wanting to implement for many years will now be quite easy to implement using filters (it could even be done as an exercise). I'm seeing this as a first step towards a big simplification of the stream processor, to possibly remove a number of the static analysers and only add them when needed. But that's another difficult change, so one thing at a time please :-)

Another big change (but quite confined) concerns the stats. I've been irritated for a long time by the difficulty of adding new output fields to the CSV dump, and by seeing people parse the HTML to get extra information. So I attacked the problem by setting myself the goal that the HTML dump would only use the data available to the CSV and should not lose any information. It was painful but it's now the case. That made it much easier to create new fields, and in the end we have much more data. And since by that time Thierry was working on a stats aggregator, it occurred to me that the CSV format is not enough to aggregate stats between multiple processes, so I introduced a new output called the "typed output format", which provides the type of each field as well as a series of qualifiers describing the nature, origin and scope of the information, so that an aggregator knows how to aggregate fields it doesn't know about (eg: you don't aggregate a limit, a max, an uptime, a weight or a request rate similarly). I ensured it is now easy to add new fields and hard to emit HTML output without passing via these fields, so there are good chances that we won't miss anything anymore in the machine-readable output.
It also means that people can start to look at the HTML generator and suggest improvements without needing a deep understanding of the internals, or even provide other backends to feed other types of interfaces.

We recently had the opportunity to feel the pain systemd users are forced to live with. This thing doesn't even make it possible to use environment variables for a reload, since it doesn't expect a new process to be started for a reload... So users believe updated variables have been used while they were just silently ignored. Just another product designed under a shower without any consideration for real life :-/ So the only solution left to us was to make it possible to define environment variables from within haproxy itself. Yes, it sounds stupid, but we have to be more stupid than the "service manager" below us to continue to operate sanely. So now in the global section it is possible to set/preset/unset environment variables. And since it is possible to have multiple global sections, it's perfectly possible to have all haproxy environment
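In practice the new global-section directives look like this — a minimal sketch, where the variable names and values are made up for illustration:

```
global
    setenv    LOG_TARGET 127.0.0.1:514   # always set, overwriting any inherited value
    presetenv STATS_USER admin           # set only if not already in the environment
    unsetenv  HOME TMPDIR                # remove inherited variables

defaults
    log "$LOG_TARGET" local0             # variables can then be referenced in the config
```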
[PATCH] BUG/MINOR: log: Don't use strftime() which can clobber timezone if chrooted
From 6d89510559e6eeaa867555a76ca9fbe65b1a9a1c Mon Sep 17 00:00:00 2001
From: Benoit GARNIER
Date: Sun, 27 Mar 2016 03:04:16 +0200
Subject: [PATCH] BUG/MINOR: log: Don't use strftime() which can clobber
 timezone if chrooted

The strftime() function can call tzset() internally on some platforms.
When haproxy is chrooted, the /etc/localtime file is not found, and some
implementations will clobber the content of the current timezone.

The GMT offset is computed by diffing the times returned by gmtime_r()
and localtime_r(). These variants are guaranteed not to call tzset()
and were already used in haproxy while chrooted, so they should be safe.

This patch must be backported to 1.6 and 1.5.
---
 include/common/standard.h |  6 ++--
 src/log.c                 |  4 +--
 src/standard.c            | 76 ++-
 3 files changed, 62 insertions(+), 24 deletions(-)

diff --git a/include/common/standard.h b/include/common/standard.h
index 353d0b0..cd2208c 100644
--- a/include/common/standard.h
+++ b/include/common/standard.h
@@ -871,10 +871,11 @@ extern const char *monthname[];
 char *date2str_log(char *dest, struct tm *tm, struct timeval *date, size_t size);
 
 /* Return the GMT offset for a specific local time.
+ * Both t and tm must represent the same time.
  * The string returned has the same format as returned by strftime(... "%z", tm).
  * Offsets are kept in an internal cache for better performances.
  */
-const char *get_gmt_offset(struct tm *tm);
+const char *get_gmt_offset(time_t t, struct tm *tm);
 
 /* gmt2str_log: write a date in the format :
  * "%02d/%s/%04d:%02d:%02d:%02d +" without using snprintf
@@ -885,10 +886,11 @@ char *gmt2str_log(char *dst, struct tm *tm, size_t size);
 
 /* localdate2str_log: write a date in the format :
  * "%02d/%s/%04d:%02d:%02d:%02d +(local timezone)" without using snprintf
+ * Both t and tm must represent the same time.
  * return a pointer to the last char written (\0) or
  * NULL if there isn't enough space.
  */
-char *localdate2str_log(char *dst, struct tm *tm, size_t size);
+char *localdate2str_log(char *dst, time_t t, struct tm *tm, size_t size);
 
 /* These 3 functions parses date string and fills the
  * corresponding broken-down time in . In succes case,
diff --git a/src/log.c b/src/log.c
index ab38353..4d496cd 100644
--- a/src/log.c
+++ b/src/log.c
@@ -979,7 +979,7 @@ static char *update_log_hdr_rfc5424(const time_t time)
 	tvsec = time;
 	get_localtime(tvsec, );
-	gmt_offset = get_gmt_offset();
+	gmt_offset = get_gmt_offset(time, );
 	hdr_len = snprintf(logheader_rfc5424, global.max_syslog_len,
 	                   ">1 %4d-%02d-%02dT%02d:%02d:%02d%.3s:%.2s %s ",
@@ -1495,7 +1495,7 @@ int build_logline(struct stream *s, char *dst, size_t maxsize, struct list *list
 		case LOG_FMT_DATELOCAL: // %Tl
 			get_localtime(s->logs.accept_date.tv_sec, );
-			ret = localdate2str_log(tmplog, , dst + maxsize - tmplog);
+			ret = localdate2str_log(tmplog, s->logs.accept_date.tv_sec, , dst + maxsize - tmplog);
 			if (ret == NULL)
 				goto out;
 			tmplog = ret;
diff --git a/src/standard.c b/src/standard.c
index e08795f..0ec4900 100644
--- a/src/standard.c
+++ b/src/standard.c
@@ -2552,31 +2552,66 @@ char *date2str_log(char *dst, struct tm *tm, struct timeval *date, size_t size)
 	return dst;
 }
 
+/* Base year used to compute leap years */
+#define TM_YEAR_BASE 1900
+
+/* Return the difference in seconds between two times (leap seconds are ignored).
+ * Taken from glibc source code.
+ */
+static int tm_diff(const struct tm *a, const struct tm *b)
+{
+	/* Compute intervening leap days correctly even if year is negative.
+	 * Take care to avoid int overflow in leap day calculations,
+	 * but it's OK to assume that A and B are close to each other.
+	 */
+	int a4 = (a->tm_year >> 2) + (TM_YEAR_BASE >> 2) - ! (a->tm_year & 3);
+	int b4 = (b->tm_year >> 2) + (TM_YEAR_BASE >> 2) - ! (b->tm_year & 3);
+	int a100 = a4 / 25 - (a4 % 25 < 0);
+	int b100 = b4 / 25 - (b4 % 25 < 0);
+	int a400 = a100 >> 2;
+	int b400 = b100 >> 2;
+	int intervening_leap_days = (a4 - b4) - (a100 - b100) + (a400 - b400);
+	int years = a->tm_year - b->tm_year;
+	int days = (365 * years + intervening_leap_days
+	            + (a->tm_yday - b->tm_yday));
+	return (60 * (60 * (24 * days + (a->tm_hour - b->tm_hour))
+	              + (a->tm_min - b->tm_min))
+	        + (a->tm_sec - b->tm_sec));
+}
+
 /* Return the GMT offset for a specific local time.
+ * Both t and tm must represent the same time.
  * The string returned has the same format as
Re: PATCH 1/1: OPTIM: args
Hi, and thanks. Attached is the patch with the related changes.

> Great. I just spotted this, are you sure there isn't a mistake ?

It is indeed :). My initial plan was (64 - ARGM_BITS) / sizeof(int), which is still wrong, although I wrongly assumed the result represented all the bits, including the ones not used.

> I remember that we always reserve one entry for ARGT_STOP, so the max
> usable number of args is 14 here. What do you think ?
>
> Just another detail here :
>
> - unsigned int arg_mask;  /* arguments (ARG*()) */
> + int64_t arg_mask;        /* arguments (ARG*()) */
>
> It's nicer to leave it unsigned (uint64_t), as seeing negative numbers for
> bitfields is really annoying when debugging.

Ok, no problem.

> Thanks!
> willy

From bde853a0702e12447b48789ba8c6d456a7a287a2 Mon Sep 17 00:00:00 2001
From: David CARLIER
Date: Tue, 15 Mar 2016 19:00:35 +
Subject: [PATCH] MINOR: sample: Moves ARGS underlying type from 32 to 64 bits.

ARG# macros allow creating a list of up to 7 args in theory but 5 in
practice. The change to a guaranteed 64 bits type increases this to up
to 12.
---
 include/proto/arg.h    | 35 ---
 include/types/arg.h    |  6 +++---
 include/types/sample.h |  4 ++--
 src/arg.c              |  2 +-
 4 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/include/proto/arg.h b/include/proto/arg.h
index 91c1acd..0fe5472 100644
--- a/include/proto/arg.h
+++ b/include/proto/arg.h
@@ -31,22 +31,43 @@
  * the number of mandatory arguments in a mask.
  */
 #define ARGM(m) \
-	(m & ARGM_MASK)
+	(int64_t)(m & ARGM_MASK)
 
 #define ARG1(m, t1) \
-	(ARGM(m) + (ARGT_##t1 << (ARGM_BITS)))
+	(ARGM(m) + ((int64_t)ARGT_##t1 << (ARGM_BITS)))
 
 #define ARG2(m, t1, t2) \
-	(ARG1(m, t1) + (ARGT_##t2 << (ARGM_BITS + ARGT_BITS)))
+	(ARG1(m, t1) + ((int64_t)ARGT_##t2 << (ARGM_BITS + ARGT_BITS)))
 
 #define ARG3(m, t1, t2, t3) \
-	(ARG2(m, t1, t2) + (ARGT_##t3 << (ARGM_BITS + ARGT_BITS * 2)))
+	(ARG2(m, t1, t2) + ((int64_t)ARGT_##t3 << (ARGM_BITS + ARGT_BITS * 2)))
 
 #define ARG4(m, t1, t2, t3, t4) \
-	(ARG3(m, t1, t2, t3) + (ARGT_##t4 << (ARGM_BITS + ARGT_BITS * 3)))
+	(ARG3(m, t1, t2, t3) + ((int64_t)ARGT_##t4 << (ARGM_BITS + ARGT_BITS * 3)))
 
 #define ARG5(m, t1, t2, t3, t4, t5) \
-	(ARG4(m, t1, t2, t3, t4) + (ARGT_##t5 << (ARGM_BITS + ARGT_BITS * 4)))
+	(ARG4(m, t1, t2, t3, t4) + ((int64_t)ARGT_##t5 << (ARGM_BITS + ARGT_BITS * 4)))
+
+#define ARG6(m, t1, t2, t3, t4, t5, t6) \
+	(ARG5(m, t1, t2, t3, t4, t5) + ((int64_t)ARGT_##t6 << (ARGM_BITS + ARGT_BITS * 5)))
+
+#define ARG7(m, t1, t2, t3, t4, t5, t6, t7) \
+	(ARG6(m, t1, t2, t3, t4, t5, t6) + ((int64_t)ARGT_##t7 << (ARGM_BITS + ARGT_BITS * 6)))
+
+#define ARG8(m, t1, t2, t3, t4, t5, t6, t7, t8) \
+	(ARG7(m, t1, t2, t3, t4, t5, t6, t7) + ((int64_t)ARGT_##t8 << (ARGM_BITS + ARGT_BITS * 7)))
+
+#define ARG9(m, t1, t2, t3, t4, t5, t6, t7, t8, t9) \
+	(ARG8(m, t1, t2, t3, t4, t5, t6, t7, t8) + ((int64_t)ARGT_##t9 << (ARGM_BITS + ARGT_BITS * 8)))
+
+#define ARG10(m, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10) \
+	(ARG9(m, t1, t2, t3, t4, t5, t6, t7, t8, t9) + ((int64_t)ARGT_##t10 << (ARGM_BITS + ARGT_BITS * 9)))
+
+#define ARG11(m, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11) \
+	(ARG10(m, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10) + ((int64_t)ARGT_##t11 << (ARGM_BITS + ARGT_BITS * 10)))
+
+#define ARG12(m, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11, t12) \
+	(ARG11(m, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11) + ((int64_t)ARGT_##t12 << (ARGM_BITS + ARGT_BITS * 11)))
 
 /* Mapping between argument number and literal description. */
 extern const char *arg_type_names[];
@@ -58,7 +79,7 @@ extern struct arg empty_arg_list[ARGM_NBARGS];
 struct arg_list *arg_list_clone(const struct arg_list *orig);
 struct arg_list *arg_list_add(struct arg_list *orig, struct arg *arg, int pos);
 
-int make_arg_list(const char *in, int len, unsigned int mask, struct arg **argp,
+int make_arg_list(const char *in, int len, int64_t mask, struct arg **argp,
                   char **err_msg, const char **err_ptr, int *err_arg,
                   struct arg_list *al);
diff --git a/include/types/arg.h b/include/types/arg.h
index 5430de7..d0dbf7a 100644
--- a/include/types/arg.h
+++ b/include/types/arg.h
@@ -35,12 +35,12 @@
 #define ARGT_NBTYPES (1 << ARGT_BITS)
 #define ARGT_MASK    (ARGT_NBTYPES - 1)
 
-/* encoding of the arg count : up to 5 args are possible. 4 bits are left
+/* encoding of the arg count : up to 12 args are possible. 4 bits are left
  * unused at the top.
  */
 #define ARGM_MASK    ((1 << ARGM_BITS) - 1)
-#define ARGM_BITS    3
-#define ARGM_NBARGS  (32 - ARGM_BITS) / sizeof(int)
+#define ARGM_BITS    4
+#define ARGM_NBARGS  (sizeof(int64_t) * 8 - ARGM_BITS) / ARGT_BITS
 
 enum {
 	ARGT_STOP = 0, /* end of the arg list */
diff --git a/include/types/sample.h b/include/types/sample.h
index 4a46be8..955f9bd 100644
---
Help! HAProxy randomly failing health checks!
Hello!

My name is Zack, and I have been in the middle of an ongoing HAProxy issue that has me scratching my head.

Here is the setup:

Our setup is hosted by Amazon, and our HAProxy (1.6.3) boxes are spread across 3 regions. We have 2 HAProxy boxes per region for a total of 6 proxy boxes.

These boxes are routed traffic through Route 53. Their entire job is to forward data from one of our clients to our database backend. They handle this absolutely fine, except between the hours of 7pm PST and 7am PST. During these hours, our Route 53 health checks time out, causing the traffic to switch to the other HAProxy box inside the same region.

During the other 12 hours of the day, we receive 0 alerts from our health checks.

I have noticed that we get a series of SSL handshake failures (though this happens throughout the entire day) that causes the server to hang for a second, thus causing the health checks to fail. During the day our SSL failures do not cause the server to hang long enough to fail the checks; they only fail at night. I have attached my HAProxy config hoping that you guys have an answer for me. Lemme know if you need any more info.

I have done a few tcpdump captures during the SSL handshake failures (not at night while it is failing, but during the day when we still get the SSL handshake failures without failing the health check), and it seems there is a disconnect and a reconnect during the handshake.

Here is my config. I will be running a tcpdump tonight to capture the packets during the failure and will attach it if you guys need more info.

#-
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#-
#- Global settings
#-
global
    log         127.0.0.1 local2
    pidfile     /var/run/haproxy.pid
    maxconn     3
    user        haproxy
    group       haproxy
    daemon
    ssl-default-bind-options no-sslv3 no-tls-tickets
    tune.ssl.default-dh-param 2048
    # turn on stats unix socket
    #stats socket /var/lib/haproxy/stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
    mode    http
    log     global
    option  httplog
    retries 3
    timeout http-request 5s
    timeout queue   1m
    timeout connect 31s
    timeout client  31s
    timeout server  31s
    maxconn 15000
    # Stats
    stats enable
    stats uri   /haproxy?stats
    stats realm Strictly\ Private
    stats auth  $StatsUser:$StatsPass

#-
# main frontend which proxies to the backends
#-
frontend shared_incoming
    maxconn 15000
    timeout http-request 5s
    # Bind ports of incoming traffic
    bind *:1025 accept-proxy  # http
    bind *:1026 accept-proxy ssl crt /path/to/default/ssl/cert.pem ssl crt /path/to/cert/folder/  # https
    bind *:1027  # Health checking port
    acl gs_texthtml url_reg \/gstext\.html  ## allow gs to do meta tag verification
    acl gs_user_agent hdr_sub(User-Agent) -i globalsign  ## allow gs to do meta tag verification
    # Add headers
    http-request set-header $Proxy-Header-Ip %[src]
    http-request set-header $Proxy-Header-Proto http if !{ ssl_fc }
    http-request set-header $Proxy-Header-Proto https if { ssl_fc }
    # Route traffic based on domain
    use_backend gs_verify if gs_texthtml or gs_user_agent  ## allow gs meta tag verification
    use_backend %[req.hdr(host),lower,map_dom(/path/to/map/file.map,unknown_domain)]
    # Drop unrecognized traffic
    default_backend unknown_domain

#-
# Backends
#-
backend server0
    ## added to allow gs ssl meta tag verification
    reqrep ^GET\ /.*\ (HTTP/.*) GET\ /GlobalSignVerification\ \1
    server server0_http server0.domain.com:80/GlobalSignVerification/
backend server1
    server server1_http server1.domain.net:80
backend server2
    server server2_http server2.domain.net:80
backend server3
    server server3_http server3.domain.net:80
backend server4
    server server4_http server4.domain.net:80
backend server5
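One side note on the dedicated health-check port above: haproxy can answer a load balancer's checks itself, without involving any backend, via the `monitor-uri` directive. A minimal sketch (the directive is real; the port matches the config above and the URI is hypothetical):

```
frontend health_check
    bind *:1027
    mode http
    monitor-uri /route53-health   # haproxy replies "200 OK" itself, no backend involved
```

This keeps the Route 53 check independent of backend state, which can help isolate whether the nightly failures come from haproxy itself or from what sits behind it.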
Re: PATCH 1/1: OPTIM: args
Hi David,

On Mon, Mar 14, 2016 at 07:54:24PM +, David CARLIER wrote:
> Hi again,
>
> Here are the same changes made in one patch to make the review more
> effective, as the changes, finally, depend on each other.

Great. I just spotted this, are you sure there isn't a mistake ?

> -/* encoding of the arg count : up to 5 args are possible. 4 bits are left
> +/* encoding of the arg count : up to 12 args are possible. 4 bits are left
>  * unused at the top.
>  */
> #define ARGM_MASK ((1 << ARGM_BITS) - 1)
> -#define ARGM_BITS 3
> -#define ARGM_NBARGS (32 - ARGM_BITS) / sizeof(int)
> +#define ARGM_BITS 4
> +#define ARGM_NBARGS (128 - ARGM_BITS) / sizeof(int64_t)

This 128 should very likely be 64 since we're moving to 64 bits. Also I'm having a hard time understanding why the number of bits is divided by the size of an int in either case. I guess it should have been something like this instead :

(before)
#define ARGM_BITS 3
#define ARGM_NBARGS (sizeof(int) * 8 - ARGM_BITS) / ARGT_BITS

(after)
#define ARGM_BITS 4
#define ARGM_NBARGS (sizeof(int64_t) * 8 - ARGM_BITS) / ARGT_BITS

Indeed, we need to subtract the number of bits needed to count args from the whole int size, and divide that by the size of each individual arg to get the total number of args. The original version only happened to work by pure luck because sizeof(int) == ARGT_BITS! I think that deserves a fix in older versions, even though it's a minor bug which cannot have any impact as-is. And in your case, that used to work as well because (128-4)/8 = 15.5 instead of 15.0, both of which are rounded down to 15.

I remember that we always reserve one entry for ARGT_STOP, so the max usable number of args is 14 here. What do you think ?

Just another detail here :

- unsigned int arg_mask;  /* arguments (ARG*()) */
+ int64_t arg_mask;        /* arguments (ARG*()) */

It's nicer to leave it unsigned (uint64_t), as seeing negative numbers for bitfields is really annoying when debugging.

Thanks!
willy
Re: peers and stick-table stats
Greetings Pavlo,

On 03/15/2016 05:23 AM, Pavlo Zhuk wrote:
> Hi,
>
> Is there any good way to monitor stick-table utilization?

The first line of a "show table" socket command has the "size" field (showing the size as set in the config) and the "used" field (showing how many entries are currently used). For example, running 'echo "show table <table>" | socat stdio /var/run/haproxy.sock | head -n1' will print:

# table: <table>, type: <type>, size:1048576, used:12331

> Also searching for any nice stats on peer replication (connectivity, fails etc)

I'm not aware of statistics for peer replication which can be extracted from HAProxy during runtime. Someone else might be able to fill in that point.

- Chad

> It doesn't seem that the stats endpoint returns any of this info.
> Appreciate your feedback. thnx
> --
> BR, Pavlo Zhuk
Re: Asking for help: how to expire haproxy's stick table entry only after the closing of all sessions which used it
Greetings Hugo,

On 03/15/2016 09:25 AM, Hugo Maia wrote:
> Hi, my name is Hugo. I'm currently using Haproxy 1.5, I have a backend
> with 2 servers. My app servers receive connections from two clients and
> I want both of them to be attributed to the same server. All connections
> have a url parameter X and sessions that should be attributed to the
> same server have the same url parameter X value. I use a stick table to
> save the server that a particular url parameter value uses so that
> future connections can be attributed to the same server.
>
> I want to be able to add app servers as load increases. In order to
> instruct haproxy to move previous connections to the new app server I
> need to expire stick table entries when no session (of either client)
> is active on the server.

Table values can be changed by sending "set table <table> key <key> data.<type> <value>" to the HAProxy socket ('socat stdio /var/run/haproxy.sock'). "table" is the name of the section the table is in, "key" is the key in the table, "type" is the datatype set in the type argument to stick-table (integer/ip/string/etc), and "value" is what you want the value to be set to.

I don't think there is a good way to delete/expire stick table entries via the socket, however (entirely possible there is and I just overlooked it once I latched onto changing values). So if I'm reading your problem correctly, you should be able to have a script change the value it wanted to expire to the new server instead of removing it.

- Chad

> Can you help me with this? Thanks in advance for any kind of help.
>
> Best Regards,
> Hugo Maia
Asking for help: how to expire haproxy's stick table entry only after the closing of all sessions which used it
Hi, my name is Hugo. I'm currently using HAProxy 1.5, and I have a backend with 2 servers. My app servers receive connections from two clients, and I want both of them to be attributed to the same server. All connections have a URL parameter X, and sessions that should be attributed to the same server have the same URL parameter X value. I use a stick table to save the server that a particular URL parameter value uses so that future connections can be attributed to the same server.

I want to be able to add app servers as load increases. In order to instruct haproxy to move previous connections to the new app server, I need to expire stick table entries when no session (of either client) is active on the server.

Can you help me with this? Thanks in advance for any kind of help.

Best Regards,
Hugo Maia
peers and stick-table stats
Hi,

Is there any good way to monitor stick-table utilization? I am also searching for any nice stats on peer replication (connectivity, failures, etc.). It doesn't seem that the stats endpoint returns any of this info.

Appreciate your feedback. thnx
--
BR, Pavlo Zhuk