Re: POST body not getting forwarded
On 20 November 2014 05:17, Rodney Smith rodney...@gmail.com wrote:

> I have a problem where a client is sending audio data via POST, and while the request line and headers reach the server, the body of the POST does not. However, if the client uses the header Transfer-Encoding: chunked and chunks the data, it does get sent. What can I do to get the POST body sent without the chunking? What can be changed to get the incoming raw data packets forwarded? I'm using HAProxy in forward proxy mode (option http_proxy). The function http_request_forward_body() has the message in the HTTP_MSG_DONE state, and the log line in process_session() line 1785 shows the incoming data accumulating (rqh = s->req->buf->i).

I don't have a direct answer to your observed problem, but I would point out that, judging by my archives, use of the http_proxy option is /extremely/ underrepresented on this list. I have no information that this is what happened here, but it is plausible for a bug to creep into this code path and remain hidden for longer because of its relative rarity. That mean time to bug discovery might be compounded by the very (very!) broad demographic generalisation that people using this simplistic feature of haproxy /might/ be less inclined to upgrade for feature-based reasons, since their architectures perhaps rely less on a fully-featured proxy being inline.

In the absence of any other information, my next step in your situation would be to see whether I could replicate the problem in a different haproxy mode, without option http_proxy. I fully recognise that this might not be possible, and I'm sure others on the list will help you discover the true root cause. I only mention it because it might not be obvious that this isn't a commonly discussed, and hence perhaps not commonly used, feature of haproxy.

HTH, Jonathan
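Jonathan's replication idea can be tried with a minimal side-by-side configuration. Everything below is illustrative (ports, proxy names and the backend address are invented, not from the original report); the point is only to compare the option http_proxy path against an ordinary explicit proxy path for the same POST traffic:

```
# forward-proxy mode: haproxy connects to the IP found in the request URL
listen via-http-proxy
    bind 127.0.0.1:3128
    mode http
    option http_proxy
    timeout client 30s
    timeout server 30s
    timeout connect 5s

# ordinary mode for comparison: same client traffic, fixed backend server
listen via-explicit-backend
    bind 127.0.0.1:3129
    mode http
    timeout client 30s
    timeout server 30s
    timeout connect 5s
    server app 192.0.2.10:80
```

If the un-chunked POST body goes through on :3129 but not on :3128, that would point squarely at the http_proxy code path.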
Re: Server definitions in backend require check ssl parameter in order for haproxy to work
On Thu, Nov 20, 2014 at 9:27 AM, Michael W Walker miwal...@us.ibm.com wrote:

> Hi, The conf file below is working with our application, and I'm assuming I'm using SSL termination at the proxy server correctly. But I'm not sure about the check statement in the backend definition where I define the servers. If I take out the check ssl parameter, or use just check or check port, I get a 502 Bad Gateway error. If I leave in the check ssl but don't include the ssl-server-verify none in the global section, I get an error about no CA file being specified. I don't think I should have to specify check ssl in the backend definition, and it looks like ssl-server-verify none is just canceling it out. But adding those in seems to be the only way I can get it to work. I googled the 502 Bad Gateway and "no CA file specified" errors, but wasn't able to find useful info. Is there something obvious I'm missing here to get it to work without the check ssl, or is it ok to leave this in? Thanks. We're currently using haproxy 1.5.6.
>
>     global
>         log 127.0.0.1 local0
>         tune.ssl.default-dh-param 2048
>         maxconn 4000
>         ssl-server-verify none
>         daemon
>
>     defaults
>         log global
>         mode http
>         option httplog
>         option dontlognull
>         retries 3
>         option redispatch
>         timeout server 5s
>         timeout connect 5s
>         timeout client 5s
>         stats enable
>         stats refresh 10s
>         stats uri /stats
>
>     frontend UCD_Frontend
>         bind *:8080
>         bind *:8444 ssl crt /etc/SSLCerts/jsoc71cert.pem
>         mode http
>         reqadd X-Forwarded-Proto:\ https
>         default_backend UCD_Servers
>
>     backend UCD_Servers
>         mode http
>         stick-table type ip size 200k expire 30m
>         stick on src
>         default-server inter 1s
>         option httpclose
>         option redispatch
>         retries 15
>         balance roundrobin
>         server jsoc70 9.30.71.70:8445 check ssl
>         server jsoc80 9.30.71.80:8443 check ssl
>
> *Michael Walker* CLM Certified miwal...@us.ibm.com 408-463-5023 Team Member IM DevOps Enablement Need help with DevOps? https://ibm.biz/IMDevOpsCoC

Hi Michael, in your email you speak about check ssl as if it were a single parameter, whereas they are two separate ones.
A check-ssl parameter does exist, though. Something else that isn't obvious: when do the 502 errors occur? On health checks, or when browsing the application? Baptiste
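To illustrate the distinction Baptiste is drawing, the sketch below (server names and addresses invented) separates the keywords: on a server line, ssl makes forwarded traffic use SSL, check enables health checks, and check-ssl forces only the health checks over SSL:

```
backend example
    mode http
    # ssl: regular traffic to the server is encrypted; check: health checks on
    server s1 192.0.2.10:8443 ssl verify none check
    # check-ssl: only the health checks use SSL, regular traffic stays plain
    server s2 192.0.2.11:8080 check check-ssl verify none
```

With plain "check ssl", the ssl keyword applies to all traffic, which is why removing it while the backend really speaks SSL produces 502s.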
Send client to a specific backend if header found in previous reply from server
Hi, I am trying to achieve the following: when a response from the application server contains a header named X-test, send the following requests from the client IP to another backend for 5 minutes. The goal is to send clients who abuse the servers to a slower queue. Here is what I have so far.

In the frontend:

    stick-table type ip size 100k expire 5m store gpc1
    tcp-request content track-sc1 src
    use_backend slow if { sc1_get_gpc1 gt 0 }

In the backend:

    acl mark_as_high_usage sc1_inc_gpc1 gt 0
    ??? if res.hdr_cnt(X-test) mark_as_high_usage

Does this look good so far? I am wondering what to use in place of the ???, because no action is to be taken in the backend; this serves only as a way to use the ACL and mark the IP using gpc1 so that the frontend sends its further connections to another backend. Thanks in advance. Sylvain Faivre.
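One possible shape for the missing piece, assuming a HAProxy version recent enough to support the http-response sc-inc-gpc1 action (the backend name is invented; this is a sketch of the idea, not a verified answer to the question):

```
backend normal
    mode http
    # bump gpc1 on the tracked stick counter (sc1) whenever the
    # server's response carries the X-test header
    http-response sc-inc-gpc1(1) if { res.hdr_cnt(X-test) gt 0 }
```

With gpc1 incremented here, the frontend's existing use_backend slow if { sc1_get_gpc1 gt 0 } rule would route the client's subsequent requests to the slow backend until the stick-table entry expires.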
http-request redirect prefix, substituting *only* the hostname without port
I've read the manual and searched extensively on this, but I seem to be missing something. How can I substitute just the requested *hostname* (excluding a non-standard port) in a redirect? I tried:

    http-request redirect prefix http://%[hdr(host)].example.com code 301

However, this preserves the non-standard port number if specified in the request, resulting in something like:

Request:

    GET / HTTP/1.1
    Host: original-hostname.com:81

Response:

    HTTP/1.1 301 Moved Permanently
    Location: http://original-hostname.com:81.example.com/

I'll have a fairly sizeable and dynamic list of hosts, so adding explicit per-host ACLs/redirects is not desired. Is there any way to strip the requested port from the host header? Is there any other way to substitute just the requested hostname? Thanks! --Scott
Re: http-request redirect prefix, substituting *only* the hostname without port
Perfect! I figured there was a way to accomplish this, but I hadn't thought about manipulating the headers. Thanks! --Scott

On Thu, Nov 20, 2014 at 11:17 AM, Baptiste bed...@gmail.com wrote:

> On Thu, Nov 20, 2014 at 4:39 PM, Scott Severtson ssevert...@digitalmeasures.com wrote:
> > I've read the manual and searched extensively on this, but I seem to be missing something. How can I substitute just the requested *hostname* (excluding a non-standard port) in a redirect? [...]
>
> Hi Scott,
> You can try to strip it before generating the rewrite:
>
>     http-request replace-value Host (.*):.* \1 if { hdr_sub(Host) : }
>     http-request redirect prefix http://%[hdr(host)].example.com code 301
>
> Baptiste
Re: http-request redirect prefix, substituting *only* the hostname without port
On Thu, Nov 20, 2014 at 5:35 PM, Scott Severtson ssevert...@digitalmeasures.com wrote:

> Perfect! I figured there was a way to accomplish this, but I hadn't thought about manipulating the headers. Thanks! --Scott

Actually, the http-request rules should be seen like a firewall ruleset: they are processed in the order they are written, so each rule benefits from the processing done by the previous ones. Baptiste
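As an illustration of that ordering (example.com standing in for the real domain, as elsewhere in the thread), the port-stripping rule must come before the redirect so that the redirect samples the already-rewritten Host value:

```
# processed top to bottom: rule 2 sees the Host value produced by rule 1
http-request replace-value Host (.*):.* \1 if { hdr_sub(Host) : }
http-request redirect prefix http://%[hdr(host)].example.com code 301
```

Reversing the two lines would put the original Host, port included, back into the Location header.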
Re: Server definitions in backend require check ssl parameter in order for haproxy to work
Hi, On Thu, Nov 20, Michael W Walker wrote:

> The conf file below is working with our application and I'm assuming I'm using SSL termination at the proxy server correctly. But I'm not sure about the check statement in the backend definition where I define the servers. If I take out the check ssl parameter or use just check or check port I get a 502 Bad Gateway error. If I leave in the check ssl but don't include the ssl-server-verify none in the global section I get an error about no CA file specified. I don't think I should have to specify check ssl in the backend definition and it looks like ssl-server-verify none is just canceling it out.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#ssl-server-verify

ssl-server-verify none means that when haproxy uses an ssl (https) connection to a backend server, it doesn't check/verify the backend server's certificate.

> But adding those in seems to be the only way I can get it to work. I googled the 502 Bad Gateway and no CA file specified errors, but wasn't able to find useful info.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html and http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5

> Is there something obvious I'm missing here to get it to work without the check ssl, or is it ok to leave this in? Thanks. We're currently using haproxy 1.5.6.
> [...]
>     backend UCD_Servers
>         mode http
>         stick-table type ip size 200k expire 30m
>         stick on src
>         default-server inter 1s
>         option httpclose
>         option redispatch
>         retries 15
>         balance roundrobin
>         server jsoc70 9.30.71.70:8445 check ssl
>         server jsoc80 9.30.71.80:8443 check ssl

Do your backend servers 9.30.71.70 / 9.30.71.80 have ssl enabled on ports 8445/8443?
(Can you connect from the haproxy server to your backend servers with telnet/netcat:

    telnet 9.30.71.70 8445
    GET / HTTP/1.1
    Host: your.service.hostname

or do you need to use ssl to get a response:

    openssl s_client -connect 9.30.71.70:8445
    GET / HTTP/1.1
    Host: your.service.hostname

) If your backend servers use ssl on ports 8445/8443, then you'll need the ssl keyword on the server config lines: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-ssl -Jarno -- Jarno Huuskonen
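For the configuration quoted above, Jarno's suggestion would amount to adding ssl to the server lines themselves. This is a sketch reusing the addresses from the original mail; whether the backends really speak SSL on these ports is exactly what the telnet/openssl test above determines:

```
backend UCD_Servers
    mode http
    balance roundrobin
    # ssl on the server line encrypts the forwarded traffic itself;
    # check then runs its health checks over the same ssl transport,
    # so a separate check-ssl keyword is no longer needed
    server jsoc70 9.30.71.70:8445 ssl check
    server jsoc80 9.30.71.80:8443 ssl check
```

With only "check ssl" and no ssl for regular traffic, haproxy health-checks over SSL but forwards client requests in clear, which a TLS-only backend answers with garbage, hence the 502s.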
[PATCH] [RFC] Linux network namespace support for haproxy [v3]
Dear Willy, list,

Here's the next round of the patch(es), this time with the following major changes:

* I've merged your fixes and updated the docs with the new namespace_list semantics
* default namespace initialization was moved to a function called from init(): this way we can signal errors and do default namespace init only if it's required (ie. the config uses namespaces). I did not find a way to do this properly from a constructor function.

I've also attached a fix for missing sanitization of the header length: the code did not check that hdr_v2->len is large enough to contain the address-family-specific address information.

-- KOVACS Krisztian

From 822ff588a2251acc1e9c9460db60625314f80b5f Mon Sep 17 00:00:00 2001
From: KOVACS Krisztian hid...@balabit.com
Date: Mon, 17 Nov 2014 15:11:45 +0100
Subject: [PATCH 2/2] namespace: add Linux network namespace support

This patch makes it possible to create binds and servers in separate namespaces. This can be used to proxy between multiple completely independent virtual networks (with possibly overlapping IP addresses) and a non-namespace-aware proxy implementation that supports the proxy protocol (v2).

The setup is something like this:

    net1 on VLAN 1 (namespace 1) -\
    net2 on VLAN 2 (namespace 2) --- haproxy --- proxy (namespace 0)
    net3 on VLAN 3 (namespace 3) -/

The proxy is configured to make server connections through haproxy and to send the expected source/target addresses to haproxy using the proxy protocol. The network namespace setup on the haproxy node is something like this:

    --- 8< ---
    $ cat setup.sh
    ip netns add 1
    ip link add link eth1 type vlan id 1
    ip link set eth1.1 netns 1
    ip netns exec 1 ip addr add 192.168.91.2/24 dev eth1.1
    ip netns exec 1 ip link set eth1.$id up
    ...
    --- 8< ---

    --- 8< ---
    $ cat haproxy.cfg
    frontend clients
        bind 127.0.0.1:50022 namespace 1 transparent
        default_backend scb

    backend server
        mode tcp
        server server1 192.168.122.4: namespace 2 send-proxy-v2
    --- 8< ---

A bind line creates the listener in the specified namespace, and connections originating from that listener also have their network namespace set to that of the listener. A server line either forces the connection to be made in a specified namespace, or may use the namespace from the client-side connection if that was set. For more, please read the documentation included in the patch itself.

Signed-off-by: KOVACS Tamas kta...@balabit.com
Signed-off-by: Sarkozi Laszlo laszlo.sark...@balabit.com
Signed-off-by: KOVACS Krisztian hid...@balabit.com
---
 Makefile                   |   9 +++-
 doc/network-namespaces.txt | 106 +
 include/common/namespace.h |  24 ++
 include/proto/connection.h |   1 +
 include/types/connection.h |   3 ++
 include/types/listener.h   |   2 +
 include/types/server.h     |   2 +
 src/backend.c              |   6 ++-
 src/cfgparse.c             |  46 +-
 src/connection.c           |  59 ---
 src/haproxy.c              |  13 ++
 src/namespace.c            | 114 +
 src/proto_tcp.c            |  57 +-
 src/server.c               |  26 +++
 src/session.c              |   1 +
 15 files changed, 453 insertions(+), 16 deletions(-)
 create mode 100644 doc/network-namespaces.txt
 create mode 100644 include/common/namespace.h
 create mode 100644 src/namespace.c

diff --git a/Makefile b/Makefile
index ac93fed..4671759 100644
--- a/Makefile
+++ b/Makefile
@@ -34,6 +34,7 @@
 # USE_ZLIB : enable zlib library support.
 # USE_CPU_AFFINITY : enable pinning processes to CPU on Linux. Automatic.
 # USE_TFO : enable TCP fast open. Supported on Linux >= 3.7.
+# USE_NS : enable network namespace support. Supported on Linux >= 2.6.24.
 #
 # Options can be forced by specifying USE_xxx=1 or can be disabled by using
 # USE_xxx= (empty string).
@@ -617,6 +618,11 @@ TRACE_COPTS := $(filter-out -O0 -O1 -O2 -pg -finstrument-functions,$(COPTS)) -O3
 COPTS += -finstrument-functions
 endif

+ifneq ($(USE_NS),)
+OPTIONS_CFLAGS += -DCONFIG_HAP_NS
+BUILD_OPTIONS  += $(call ignore_implicit,USE_NS)
+endif
+
 #### Global link options
 # These options are added at the end of the ld command line. Use LDFLAGS to
 # add options at the beginning of the ld command line if needed.
@@ -657,7 +663,8 @@ OBJS = src/haproxy.o src/sessionhash.o src/base64.o src/protocol.o \
        src/stream_interface.o src/dumpstats.o src/proto_tcp.o \
        src/session.o src/hdr_idx.o src/ev_select.o src/signal.o \
        src/acl.o src/sample.o src/memory.o src/freq_ctr.o src/auth.o \
-       src/compression.o src/payload.o src/hash.o src/pattern.o src/map.o
+       src/compression.o src/payload.o src/hash.o src/pattern.o src/map.o \
+       src/namespace.o

 EBTREE_OBJS = $(EBTREE_DIR)/ebtree.o \
        $(EBTREE_DIR)/eb32tree.o $(EBTREE_DIR)/eb64tree.o \
diff --git
Re: haproxy 1.5.8 segfault
Hi Willy, On 2014/11/19 2:31, Willy Tarreau wrote:

> On Tue, Nov 18, 2014 at 08:23:57PM +0200, Denys Fedoryshchenko wrote:
> > Thanks! Seems working for me :) Will test more tomorrow.
>
> There's no reason it would not, otherwise we'd have a different bug. When I'm unsure I ask for testing before committing, but here there was no doubt once the issue was understood :-) Willy

Such a quick fix. Cool! :-) In fact, I have also experienced this kind of issue before. Of course it was not caused by the original HAProxy code but by my own code added to HAProxy. However, the real reason is the same as in this issue: the memory allocated from the pool is not reset properly. So I have an idea for this kind of issue: how about having HAProxy reset the memory allocated from the pool directly in pool_alloc2()? If we worry that performance may be decreased by calling memset() in each pool_alloc2(), a new option allowing the user to enable or disable memset() in pool_alloc2() could be added to HAProxy. Since it is not an urgent issue, just take your time. :-) -- Best Regards, Godbach
Re: haproxy 1.5.8 segfault
Hi Godbach! On Fri, Nov 21, 2014 at 11:02:52AM +0800, Godbach wrote:

> In fact, I have also experienced this kind of issue before. Of course it is not caused by original HAProxy codes but my own codes added to HAProxy. However, the real reason is the same as this issue: the memory allocated from pool is not reset properly.

And that's intended. pool_alloc2() works exactly like malloc(): the caller is responsible for initializing the memory if needed.

> So I have an idea for this kind issue: how about HAProxy reset the memory allocated from pool directly in pool_alloc2(). If we worry about that the performance may be decreased by calling memset() in each pool_alloc2(), a new option which allows user to enable or disable memset() in pool_alloc2() can be added into HAProxy.

We only do that (partially) when using memory poisoning/debugging (to reproduce issues more easily). Yes, performance suffers a lot when doing so, especially when using large buffers, and people using large buffers are the ones who care the most about performance. I'd agree to slightly change pool_alloc2() to *always* memset the area when memory poisoning is in place, so that developers can more easily detect if they missed something. But I don't want to use memset all the time as a brown paper bag so that developers don't have to be careful. We're missing some documentation of course, and people can get trapped from time to time (as I do as well), so this is what we must improve, rather than having the code hide the bugs.
What is really needed is that each field of the session/transaction be documented: who uses it, when, and who's responsible for initializing it. Here with the capture, I missed the fact that the captures are part of a transaction, and thus were initialized by the HTTP code, so when using tcp without http, there's an issue... A simple comment like /* initialized by http_init_txn() */ in front of the capture field in the struct would have been enough to avoid this. This is what must be improved. We also need to write developer guidelines reminding people to update the doc/comments when modifying the API. I know it's not easy; I miss a number of them as well. Cheers, Willy
Re: [PATCH] [RFC] Linux network namespace support for haproxy [v3]
Hi Krisztian, On Thu, Nov 20, 2014 at 08:39:24PM +0100, KOVACS Krisztian wrote:

> Here's the next round of the patch(es), this time with the following major changes:
>
> * I've merged your fixes and updated the docs with the new namespace_list semantics
> * default namespace initialization was moved to a function called from init(): this way we can signal errors and do default namespace init only if it's required (ie. the config uses namespaces). I did not find a way to do this properly from a constructor function.
>
> I've also attached a fix for missing sanitization of the header length: the code did not check that hdr_v2->len is large enough to contain the address family specific address information.

Great. The patch looks good now; it builds fine both with and without NS support. I have merged it so that it gets broader review. I think we'll change a few things in the near future, such as probably renaming namespace_list to namespaces, to be more consistent with e.g. peers vs peer. I also think we don't need that create_server_socket() function; we'd rather open-code it in the caller so that we don't put two layers of obfuscation on top of socket() here. As you see, these are very tiny details; what matters the most is that it's now merged and we can all start to play with it. So, many thanks for this work! Willy