[PATCH] BUILD: fix for non-transparent builds
Broke in dba9707713eb49a39b218f331c252fb09494c566.
---
 src/server.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/server.c b/src/server.c
index 23343d86..536bf853 100644
--- a/src/server.c
+++ b/src/server.c
@@ -1742,12 +1742,14 @@ int parse_server(const char *file, int linenum, char **args, struct proxy *curpr
 			}
 		}
 	}
+#if defined(CONFIG_HAP_TRANSPARENT)
 	if (curproxy->defsrv.conn_src.bind_hdr_name != NULL) {
 		newsrv->conn_src.bind_hdr_name = strdup(curproxy->defsrv.conn_src.bind_hdr_name);
 		newsrv->conn_src.bind_hdr_len = strlen(curproxy->defsrv.conn_src.bind_hdr_name);
 	}
 	newsrv->conn_src.bind_hdr_occ = curproxy->defsrv.conn_src.bind_hdr_occ;
 	newsrv->conn_src.tproxy_addr = curproxy->defsrv.conn_src.tproxy_addr;
+#endif
 	if (curproxy->defsrv.conn_src.iface_name != NULL)
 		newsrv->conn_src.iface_name = strdup(curproxy->defsrv.conn_src.iface_name);
--
2.11.1
[PATCH] DOC: stick-table is available in frontend sections
Fix the proxy keywords matrix to reflect that it's permitted to use
stick-table in frontend sections.

Signed-off-by: Adam Spiers
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 81b641e..05f0701 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1981,7 +1981,7 @@
 stick match                -    -    X    X
 stick on                   -    -    X    X
 stick store-request        -    -    X    X
 stick store-response       -    -    X    X
-stick-table                -    -    X    X
+stick-table                -    X    X    X
 tcp-check connect          -    -    X    X
 tcp-check expect           -    -    X    X
 tcp-check send             -    -    X    X
--
2.1.2.330.g565301e
Re: [RFC][PATCHES] seamless reload
On Thu, Apr 06, 2017 at 04:56:47PM +0200, Pavlos Parissis wrote:
> On 06/04/2017 04:25 μμ, Olivier Houchard wrote:
> > Hi,
> >
> > The attached patchset is the first cut at an attempt to work around
> > the linux issues with SO_REUSEPORT that make haproxy refuse a few new
> > connections under heavy load.
> > This works by transferring the existing sockets to the new process via
> > the stats socket. A new command-line flag has been added, -x, that
> > takes the path to the unix socket as an argument, and if set, will
> > attempt to retrieve all the listening sockets.
> > Right now, any error, either while connecting to the socket or
> > retrieving the file descriptors, is fatal, but maybe it'd be better to
> > just fall back to the previous behavior instead of opening any missing
> > socket? I'm still undecided about that.
> >
> > Any testing, comments, etc. would be greatly appreciated.
>
> Does this patch set support HAProxy in multiprocess mode (nbproc > 1)?

Hi Pavlos,

If it does not, it's a bug :) In my few tests, it seemed to work.

Olivier
Re: [RFC][PATCHES] seamless reload
On 06/04/2017 04:25 μμ, Olivier Houchard wrote:
> Hi,
>
> The attached patchset is the first cut at an attempt to work around the
> linux issues with SO_REUSEPORT that make haproxy refuse a few new
> connections under heavy load.
> This works by transferring the existing sockets to the new process via
> the stats socket. A new command-line flag has been added, -x, that
> takes the path to the unix socket as an argument, and if set, will
> attempt to retrieve all the listening sockets.
> Right now, any error, either while connecting to the socket or
> retrieving the file descriptors, is fatal, but maybe it'd be better to
> just fall back to the previous behavior instead of opening any missing
> socket? I'm still undecided about that.
>
> Any testing, comments, etc. would be greatly appreciated.

Does this patch set support HAProxy in multiprocess mode (nbproc > 1)?

Cheers,
Pavlos
[RFC][PATCHES] seamless reload
Hi,

The attached patchset is the first cut at an attempt to work around the
linux issues with SO_REUSEPORT that make haproxy refuse a few new
connections under heavy load.
This works by transferring the existing sockets to the new process via
the stats socket. A new command-line flag has been added, -x, that takes
the path to the unix socket as an argument, and if set, will attempt to
retrieve all the listening sockets.
Right now, any error, either while connecting to the socket or
retrieving the file descriptors, is fatal, but maybe it'd be better to
just fall back to the previous behavior instead of opening any missing
socket? I'm still undecided about that.

Any testing, comments, etc. would be greatly appreciated.

Regards,

Olivier

>From f2a13d1ce2f182170f70fe3d5312a538788f5877 Mon Sep 17 00:00:00 2001
From: Olivier Houchard
Date: Wed, 5 Apr 2017 22:24:59 +0200
Subject: [PATCH 1/6] MINOR: cli: Add a command to send listening sockets.

Add a new command that will send all the listening sockets, via the
stats socket, and their properties. This is a first step to work around
the linux problem when reloading haproxy.
---
 include/types/connection.h |   8 ++
 src/cli.c                  | 179 +
 2 files changed, 187 insertions(+)

diff --git a/include/types/connection.h b/include/types/connection.h
index 5ce5e0c..9d1b51a 100644
--- a/include/types/connection.h
+++ b/include/types/connection.h
@@ -389,6 +389,14 @@ struct tlv_ssl {
 #define PP2_CLIENT_CERT_CONN 0x02
 #define PP2_CLIENT_CERT_SESS 0x04
+
+/*
+ * Linux seems to be able to send 253 fds per sendmsg(), not sure
+ * about the other OSes.
+ */
+/* Max number of file descriptors we send in one sendmsg() */
+#define MAX_SEND_FD 253
+
 #endif /* _TYPES_CONNECTION_H */

diff --git a/src/cli.c b/src/cli.c
index fa45db9..37fb9fe 100644
--- a/src/cli.c
+++ b/src/cli.c
@@ -24,6 +24,8 @@
 #include
 #include
+#include
+
 #include
 #include
 #include
@@ -1013,6 +1015,182 @@ static int bind_parse_level(char **args, int cur_arg, struct proxy *px, struct b
 	return 0;
 }

+/* Send all the bound sockets, always returns 1 */
+static int _getsocks(char **args, struct appctx *appctx, void *private)
+{
+	char *cmsgbuf = NULL;
+	unsigned char *tmpbuf = NULL;
+	struct cmsghdr *cmsg;
+	struct stream_interface *si = appctx->owner;
+	struct connection *remote = objt_conn(si_opposite(si)->end);
+	struct msghdr msghdr;
+	struct iovec iov;
+	int *tmpfd;
+	int tot_fd_nb = 0;
+	struct proxy *px;
+	int i = 0;
+	int fd = remote->t.sock.fd;
+	int curoff = 0;
+	int old_fcntl;
+	int ret;
+
+	/* Temporarily set the FD in blocking mode, that will make our life easier */
+	old_fcntl = fcntl(fd, F_GETFL);
+	if (old_fcntl < 0) {
+		Warning("Couldn't get the flags for the unix socket\n");
+		goto out;
+	}
+	cmsgbuf = malloc(CMSG_SPACE(sizeof(int) * MAX_SEND_FD));
+	if (!cmsgbuf) {
+		Warning("Failed to allocate memory to send sockets\n");
+		goto out;
+	}
+	if (fcntl(fd, F_SETFL, old_fcntl & ~O_NONBLOCK) == -1) {
+		Warning("Cannot make the unix socket blocking\n");
+		goto out;
+	}
+	iov.iov_base = &tot_fd_nb;
+	iov.iov_len = sizeof(tot_fd_nb);
+	if (!cli_has_level(appctx, ACCESS_LVL_ADMIN))
+		goto out;
+	memset(&msghdr, 0, sizeof(msghdr));
+	/*
+	 * First, calculate the total number of FDs, so that we can let
+	 * the caller know how many to expect.
+	 */
+	px = proxy;
+	while (px) {
+		struct listener *l;
+
+		list_for_each_entry(l, &px->conf.listeners, by_fe) {
+			/* Only transfer IPv4/IPv6 sockets */
+			if (l->proto->sock_family == AF_INET ||
+			    l->proto->sock_family == AF_INET6)
+				tot_fd_nb++;
+		}
+		px = px->next;
+	}
+	if (tot_fd_nb == 0)
+		goto out;
+
+	/* First send the total number of file descriptors, so that the
+	 * receiving end knows what to expect.
+	 */
+	msghdr.msg_iov = &iov;
+	msghdr.msg_iovlen = 1;
+	ret = sendmsg(fd, &msghdr, 0);
+	if (ret != sizeof(tot_fd_nb)) {
+		Warning("Failed to send the number of sockets to send\n");
+		goto out;
+	}
+
+	/* Now send the fds */
+	msghdr.msg_control = cmsgbuf;
+	msghdr.msg_controllen = CMSG_SPACE(sizeof(int) * MAX_SEND_FD);
+	cmsg = CMSG_FIRSTHDR(&msghdr);
+	cmsg->cmsg_len = CMSG_LEN(MAX_SEND_FD * sizeof(int));
+	cmsg->cmsg_level = SOL_SOCKET;
+	cmsg->cmsg_type = SCM_RIGHTS;
+	tmpfd = (int *)CMSG_DATA(cmsg);
+
+	px = proxy;
+	/* For each socket, e mess
Re: HaProxy Hang
On Mon, 03 Apr 2017 12:45:57 -0400, Dave Cottlehuber wrote:
> On Mon, 13 Mar 2017, at 13:31, David King wrote:
> > Hi All
> > Apologies for the delay in response, I've been out of the country for
> > the last week.
> > Mark, my gut feeling is that it's network related in some way, so I
> > thought we could compare the networking setup of our systems.
> > You mentioned you see the hang across geo locations, so I assume there
> > isn't layer 2 connectivity between all of the hosts? Is there any
> > back-end connectivity between the haproxy hosts?
>
> Following up on this, some interesting points but nothing useful:
>
> - Mark & I see the hang at almost exactly the same time on the same
>   day: 2017-02-27T14:36Z, give or take a minute either way
> - I see the hang in 3 different regions using 2 different hosting
>   providers, on both clustered and non-clustered services, but all on
>   FreeBSD 11.0R amd64. There is some dependency between these systems
>   but nothing unusual (logging backends, reverse proxied services etc.)
> - our servers don't have a specific workload that would allow them all
>   to run out of some internal resource at the same time, as their
>   reboot and patch cycles are reasonably different - typically a few
>   days elapse between first patches and last reboots unless it's
>   deemed high risk
> - our networking setup is not complex but typical FreeBSD:
>   - LACP bonded Gbit igb(4) NICs
>   - CARP failover for both ipv4 & ipv6 addresses
>   - either direct to haproxy for http & TLS traffic, or via spiped to
>     decrypt intra-server traffic
>   - haproxy directs traffic into jailed services
> - our overall load and throughput is low but consistent
> - pf firewall
> - rsyslog for logging, along with riemann and graphite for metrics
> - all our db traffic (couchdb, kyoto tycoon) and rabbitmq go via
>   haproxy
> - haproxy 1.6.10 + libressl at the time
>
> As I'm not one for conspiracy theories or weird coincidences, somebody
> port scanning the internet with an Unexpectedly Evil Packet Combo seems
> the most plausible explanation. I cannot find an alternative that would
> fit the scenario of 3 different organisations with geographically
> distributed equipment and unconnected services reporting an unusual
> interruption on the same day at almost the same time.
>
> Since then, I've moved to FreeBSD 11.0p8, haproxy 1.7.3 and the latest
> libressl and seen no recurrence, just like the last 8+ months or so
> since first deploying haproxy on FreeBSD instead of debian & nginx.
>
> If the issue recurs I plan to run a small cyclic traffic capture with
> tcpdump and wait for a repeat, see
> https://superuser.com/questions/286062/practical-tcpdump-examples
>
> Let me know if I can help or clarify further.
>
> A+
> Dave

Hi Dave,

Thanks for keeping this thread going.

As for the initial report with all servers hanging, I too run NTP
(actually OpenNTPd), and these only speak to in-house stratum-2 servers.

As a follow-up to my initial report, I upgraded to 1.7.3 shortly
thereafter. I've had one recurrence of this "hang", but this time it did
not affect all of my servers; instead, it affected only 2 (the busier
ones). If the theory about some timing event (leap second, counter
wrapping, etc.) is correct, perhaps it only affects processes actually
accepting or handling a connection in a particular state at the time.

I have not yet upgraded beyond 1.7.3.

Best,
-=Mark
Re: Certificate order
Hi Sander,

On 2017-04-06 10:45, Sander Hoentjen wrote:
> Hi guys,
>
> We have a setup where we sometimes have multiple certificates for a
> domain. We use multiple directories for that and would like the
> following behavior:
> - Look in dir A for any match, use it if found
> - Look in dir B for any match, use it if found
> - Look in dir .. etc
>
> This works great, except for wildcards. Right now a domain match in
> dir B takes precedence over a wildcard match in dir A. Is there a way
> to get haproxy to behave the way I describe?

I was playing with this some time ago, and my solution was to just
think about the order of certificate loading. I found out that the last
certificate was preferred if it matched. Not sure if this has changed
over time.

Greets,
Sander
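For reference, multiple certificate directories are declared by repeating the `crt` argument on the `bind` line, and haproxy builds a single SNI lookup tree from everything loaded. A minimal sketch (paths and frontend name are hypothetical); note that, consistent with the behaviour described in this thread, an exact-name certificate is preferred over a wildcard one regardless of which directory it was loaded from, so directory order alone cannot make a wildcard in the first directory win:

```haproxy
# Sketch only: hypothetical paths. Certificates from both directories
# end up in one SNI lookup; exact-name matches beat wildcard matches.
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/dirA crt /etc/haproxy/certs/dirB
    default_backend bk_web
```

One workaround under that assumption is to prune the directories so that only the certificates you actually want considered are loaded, or to enumerate certificates explicitly in the desired order via a crt-list file.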
Re: ssl & default_backend
On 03/04/17 at 19:12, PiBa-NL wrote:
> Hi Antonio,
>
> On 3-4-2017 at 13:29, Antonio Trujillo Carmona wrote:
> > It's well documented that Windows XP with Internet Explorer doesn't
> > support SNI, so I try to redirect such calls through
> > "default_backend", but I get ERROR 404; it works fine with every
> > other OS/browser combination. If I (only for test purposes) comment
> > out the four lines with "ssiiprovincial" (which means all the
> > traffic must be redirected through default_backend), it doesn't work
> > with any OS/browser.
> >
> > frontend Aplicaciones
> >     bind *:443
> >     mode tcp
> >     log global
> >     tcp-request inspect-delay 5s
> >     tcp-request content accept if { req_ssl_hello_type 1 }
> >     # Parametros para utilizar SNI (Server Name Indication)
> >     acl aplicaciones req_ssl_sni -i aplicaciones.gra.sas.junta-andalucia.es
> >     acl citrixsf req_ssl_sni -i ssiiprovincial.gra.sas.junta-andalucia.es
> >     acl citrixsf req_ssl_sni -i ssiiprovincial01.gra.sas.junta-andalucia.es
> >     acl citrixsf req_ssl_sni -i ssiiprovincial.hvn.sas.junta-andalucia.es
> >     acl citrixsf req_ssl_sni -i ssiiprovincial01.hvn.sas.junta-andalucia.es
> >     use_backend CitrixSF-SSL if citrixsf
> >     use_backend SevidoresWeblogic-12c-Balanceador-SSL
>
> There is no acl for the use_backend above? So probably the
> default_backend below will never be reached. Could it be that backend
> returns the 404 you're seeing?
>
> >     default_backend CitrixSF-SSL
>
> Regards,
> PiBa-NL

You are right, it was a mistake from making too many changes while
chasing session affinity; at some point I dropped the "if aplicaciones".
Thanks.

--
Antonio Trujillo Carmona
Técnico de redes y sistemas.
Subdirección de Tecnologías de la Información y Comunicaciones
Servicio Andaluz de Salud. Consejería de Salud de la Junta de Andalucía
antonio.trujillo.s...@juntadeandalucia.es
Tel. +34 670947670 (747670)
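The fix in this thread boils down to: every `use_backend` needs its condition, otherwise it unconditionally swallows all traffic before `default_backend` is consulted. A minimal sketch of the intended pattern (all names here are hypothetical, not the poster's real config): clients that present SNI and match an ACL are routed explicitly, while clients that send no SNI at all, such as IE on Windows XP, match no `req_ssl_sni` ACL and fall through to the default.

```haproxy
frontend tls-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    acl app    req_ssl_sni -i app.example.com
    acl citrix req_ssl_sni -i citrix.example.com

    # Each use_backend carries its condition; nothing is swallowed
    # unconditionally, so non-SNI clients reach the default_backend.
    use_backend bk_app    if app
    use_backend bk_citrix if citrix
    default_backend bk_citrix
```

The buggy config was equivalent to dropping the `if app` above: a bare `use_backend` matches everything, so the `default_backend` line is dead.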
Certificate order
Hi guys,

We have a setup where we sometimes have multiple certificates for a
domain. We use multiple directories for that and would like the
following behavior:

- Look in dir A for any match, use it if found
- Look in dir B for any match, use it if found
- Look in dir .. etc

This works great, except for wildcards. Right now a domain match in dir
B takes precedence over a wildcard match in dir A. Is there a way to get
haproxy to behave the way I describe?

Regards,
Sander