Re: Haproxy running on ipv6 and http-in/<NOSRV>

2023-12-01 Thread Holger Just
Hi Christoph,

Christoph Kukulies wrote on 2023-12-01 09:59:
>> Seems normal, status code is 301 and you have "redirect scheme https code
>> 301 if !{ ssl_fc }"
>> Is this what you expect or do you think there're some errors ?
>
> But the http-in/<NOSRV> is bugging me.

This tells you that the request was accepted by and handled in the
http-in frontend without being forwarded to any backend
server.

This is expected since the request was answered by HAProxy
itself with the 301 redirect. The LR-- termination state in the log line
confirms this. To quote the documentation:

>  LR   The request was intercepted and locally handled by HAProxy. Generally
>   it means that this was a redirect or a stats request.
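
For illustration, an HTTP log line for such a locally handled redirect looks roughly like this (all values made up; note the <NOSRV> server name, as no backend server was ever selected):

Dec  1 10:00:00 lb haproxy[1234]: 192.0.2.10:51234 [01/Dec/2023:10:00:00.123] http-in http-in/<NOSRV> 0/-1/-1/-1/0 301 110 - - LR-- 1/1/0/0/0 0/0 "GET /foo HTTP/1.1"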

All the best,
Holger

-- 
Holger JUST (he/him)



Re: HAProxy 1.8.4 crashing

2018-07-05 Thread Holger Just
Hi Praveen,

There are several fixes for segfaults which might occur in your version
of HAProxy. Before checking anything else, you should upgrade to the
latest version of HAProxy 1.8 (currently 1.8.12).

See http://www.haproxy.org/bugs/bugs-1.8.4.html for the list of bugs
present in your current version which have been fixed in later releases.

Regards,
Holger

UPPALAPATI, PRAVEEN wrote:
> 
> Hi Haproxy Team,
> 
> Our Prod Haproxy instance is crashing with following error in 
> /var/log/messages:
> 
> Jun 28 17:52:30 zlp32359 kernel: haproxy[55940]: segfault at 60 ip 
> 0045b0a9 sp 7f4ef6b9f010 error 4 in haproxy[40+12b000]
> Jun 28 17:56:01 zlp32359 systemd: Started Session 73792 of user root.
> Jun 28 17:56:01 zlp32359 systemd: Starting Session 73792 of user root.
> Jun 28 17:56:01 zlp32359 LMC: Hardware Manufacturer = VMWARE
> Jun 28 17:56:01 zlp32359 LMC: Hardware Product Name = VMware Virtual Platform
> Jun 28 17:56:01 zlp32359 LMC: Hardware Serial # = VMware-42 29 ea 5e 6c 7b 5b 
> 49-ca 32 48 fb 5a 9d e7 d6
> Jun 28 17:56:01 zlp32359 LMC: ### NO PID_MAX ISSUES FOUND ###
> Jun 28 17:56:01 zlp32359 LMC: ### NO READ ONLY FILE SYSTEM ISSUES FOUND ###
> Jun 28 17:56:01 zlp32359 LMC: ### NO SCSI ABORT ISSUES FOUND ###
> Jun 28 17:56:02 zlp32359 LMC: ### NO SCSI ERROR ISSUES FOUND ###
> 
> HaProxyVersion:
> 
> haproxy -v
> HA-Proxy version 1.8.4-1deb90d 2018/02/08
> Copyright 2000-2018 Willy Tarreau 
> 
> Cmd to run haproxy :
> 
> //opt/app/haproxy/sbin/haproxy -D -f //opt/app/haproxy/etc/haproxy.cfg -f 
> //opt/app/haproxy/etc/ haproxy-healthcheck.cfg -p 
> //opt/app/haproxy/log/haprox.pid
> 
> Let me know if you need more information and if you need more logging let me 
> know how can enable it. I am not able to reproduce this in our dev 
> box(Probably I am not able to replicate the traffic on dev).
> 
> Thanks,
> Praveen.
> 
> 
> 
> 



Re: Domain fronting

2018-05-07 Thread Holger Just
Hi Mildis (and this time the list too),

Mildis wrote:
> Is there a simple way to limit TLS domain fronting by forcing SNI and Host 
> header to be the same ?
> Maybe add an optional parameter like "strict_sni_host" ?

You can do a little trick here to enforce this without having to rely on
additional code in HAProxy.

What you can do is to build a new temporary HTTP header which contains
the concatenated values of the HTTP host header and the SNI server name
value. Using a regular expression, you can then check that the two
values are the same.

This approach is a bit special since regular expressions (or generally
any compared value) need to be static in HAProxy and can't contain
dynamically generated values.

I often use the following configuration snippet in my frontends:

# Enforce that the TLS SNI field (if provided) matches the HTTP hostname
# This is a bit "hacky" as HAProxy neither allows to compare two
# headers directly nor allows dynamic patterns in general. Thus, we
# concatenate the HTTP Header and the SNI field in an internal header
# and check if the same value is repeated in that header.
http-request set-header X-CHECKSNI %[req.hdr(host)]==%[ssl_fc_sni] if { ssl_fc_has_sni }

# This needs to be a named capture because of "reasons". Backreferences
# to normal captures are rejected by (my version of) HAProxy
http-request deny if { ssl_fc_has_sni } ! { hdr(X-CHECKSNI) -m reg -i ^(?<v>.+)==\1$ }

# Cleanup after us
http-request del-header X-CHECKSNI

Cheers, Holger



Re: Will HAProxy community supports mailers section?

2017-08-24 Thread Holger Just
Hi Rajesh,

Rajesh Kolli wrote:
> i am getting this error if i use mailers section in my configuration.

The ability to send mail alerts (and thus to configure this with a
mailers section) was added in HAProxy 1.6. If you use an older version,
this feature is not yet available to you.

Once you update to a newer version (e.g. the current version 1.7.8), the
feature should be usable for you.
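
For reference, a minimal sketch of such a setup on HAProxy 1.6+ (all addresses and names made up):

mailers mymailers
  mailer smtp1 192.0.2.25:587

backend be_app
  email-alert mailers mymailers
  email-alert from haproxy@example.com
  email-alert to ops@example.com
  server app1 10.0.0.1:80 check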

Regards,
Holger



Re: very low requests per second rate with option forceclose (now with details)

2017-08-16 Thread Holger Just
Hi Stefan,

Stefan Sticht wrote:
> I also can test the webserver directly bypassing the haproxy completely
> (apache2.4 on webserver has "KeepAlive Off” configured)
> $ ab -v 1 -c 10 -n 1000 http://10.27.100.45/test/index.html | grep -e
> Requests -e Complete -e Failed
> Complete requests:  1000
> Failed requests:0
> Requests per second:7948.87 [#/sec] (mean)

Here, you are running ab over plain HTTP to your backend server.

>> Without forceclose:
>>
>> $ ab -v 1 -k -c 10 -n 1000 https://w:8001/test/index.html | grep -e
>> Requests -e Complete -e Failed
>> Complete requests:  1000
>> Failed requests:0
>> Requests per second:1112.29 [#/sec] (mean)
>>
>> With foreclose:
>>
>> $ ab -v 1 -k -c 10 -n 1000 https://w:8003/test/index.html | grep -e
>> Requests -e Complete -e Failed
>> Complete requests:  1000
>> Failed requests:0
>> Requests per second:25.86 [#/sec] (mean)

However, with these tests, you are running over TLS. This makes a huge
difference in performance.

Since the most expensive part of a TLS tunnel is establishing the
connection (which involves slow asymmetric cryptography), you are
basically constrained by the handshake rate here.

Now, in the real world, most clients will try to re-use existing TLS
sessions using either server-stored TLS sessions or client-stored TLS
tickets, both of which allow them to skip the most expensive part of a
new connection.

Apache bench does not re-use sessions. As such, what you are effectively
benchmarking here is the ability of your server to handle new TLS
handshakes. When disabling keep-alive, ab has to create a completely new
TLS connection for each request while it reuses the existing connections
with keep-alive enabled. This alone can explain the performance
differences you see there.

Now, even with a server without AES-NI support in the CPU, 25 handshakes
per second and core is still pretty low. With a modern CPU, I would
expect about 350 handshakes per second and core.

In any case, you could increase performance by running with a larger
nbproc for your SSL handling (e.g. as many processes as you have cores
or even hyperthreads) and by using a CPU which has AES-NI support and is
thus able to perform many of the expensive asymmetric crypto operations
in hardware. Getting rid of virtualization layers also helps tremendously.

The biggest performance increase when using HTTPS in the real world,
however, would probably be to actually enable keep-alive, at least
between the client and haproxy.

Regards,
Holger



Re: fields vs word converter, unexpected "0" result

2017-08-01 Thread Holger Just
Hi Daniel,

Daniel Schneller wrote:
> root@haproxy-1:~# curl -s http://127.0.0.1:8881
> Aug  1 15:12:55 haproxy-1 haproxy[3049]: 127.0.0.1:45875 
> [01/Aug/2017:15:12:55.198] "0"
> 
> While the first three are expected, the last one confuses me. Why would 
> leaving the header out result in “0” being logged?

Because the header is not left out in your request. Instead, the raw
request is sent as follows:

GET / HTTP/1.1
Host: 127.0.0.1:8881
User-Agent: curl/7.43.0
Accept: */*

The HTTP 1.1 specification requires that a Host header is always sent
along with the request. Curl specifically always sends the host from the
given URL, unless it was explicitly overridden.

Thus, the fetch extracts the second part from the IP address given in
the Host header, which is 0 in your case.
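
For illustration, a capture along these lines (a guess at your actual configuration) would log "0" for Host: 127.0.0.1:8881, as "0" is the second dot-separated field of that value:

http-request capture req.hdr(host),field(2,.) len 10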

Regards,
Holger



Re: Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Holger Just
Hi Mats,

Mats Eklund wrote:
> I am running a load balanced Tomcat application on Openshift Online
> v2, with HAProxy ver. 1.4.22 as load balancer.

With your current config, HAProxy will add a "Connection: close" header
to responses. However, since you mentioned you are running this in an
OpenShift environment, there might be (and probably is) another layer
of proxies involved between your HAProxy and your client.

Since you are speaking plain HTTP here, this other proxy might choose to
use keep-alive connections towards the client, similar to how HAProxy's
option http-server-close works. In that case, you would have to change
the configuration of this other proxy too.
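
For reference, the equivalent behavior on the HAProxy side would be configured roughly like this (a sketch only; on OpenShift the generated config may differ):

defaults
  mode http
  # keep client-side connections alive, close server-side after each response
  option http-server-close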

Best,
Holger

P.S. HAProxy 1.4 is OLD and receives only critical fixes now. You should
seriously consider upgrading to a newer version. The current stable
version is 1.7.26.

At the very least, you should upgrade to the latest 1.4 version:
1.4.27 has fixed 83 known bugs since 1.4.22. See
https://www.haproxy.org/bugs/bugs-1.4.22.html for details.



Re: [PATCH] Add b64dec sample converter

2017-05-12 Thread Holger Just
Hi Willy,

Willy Tarreau wrote:
> The thing is that we normally don't backport any feature anymore to
> stable branches due to the terrible experience in 1.4 where too much
> riskless stuff was backported, then fixed, then removed etc... making
> each subsequent version a pain for certain users.
> 
> [...]
> 
> And this gives incentive to users of older releases to start to 
> consider new ones :-)

Those are all very good reasons for not backporting the patch. I hadn't
considered that an exception is usually not important enough to break
the default of having rock solid stable versions.

That is indeed a very good hard rule to have in a software maintainer's
handbook. Thanks for taking the time to explain your reasoning and also
for saying no.

Cheers,
Holger



Re: [PATCH] Add b64dec sample converter

2017-05-12 Thread Holger Just
Hi Willy,

thanks for applying the patch!

Willy Tarreau wrote:
> Thanks for the warning, much appreciated. It made me re-read it after
> applying it. But your code is fine, no problem detected! So you're
> becoming a C programmer ;-)

Yeah, we will see about that :)

>> Once verified, I think this converter can be safely added to the
>> supported stable versions of HAProxy.
> 
> Yes I think it can make sense to backport it at least to 1.7, it can
> help sometimes.

That would be much appreciated. I think a backport even down to 1.6 is
pretty risk-free given that the structure there hasn't changed much
lately and the patch applies cleanly even on 1.6.0.

Cheers,
Holger



Re: reqrep syntax

2017-05-08 Thread Holger Just
Hi Ari,

Aristedes Maniatis wrote:
> In the manual [1] there is an example for using reqrep with syntax
> like this:
> 
> # replace "/static/" with "/" at the beginning of any request path. 
> reqrep ^([^\ :]*)\ /static/(.*) \1\ /\2
> 
> [...]
> 
> Firstly, is there no better/cleaner way to rewrite the path than this
> messy regex which seems to process all the http headers rather than
> just the request line that contains the path?

With a modern HAProxy (i.e. anything >= 1.6), you can just use

http-request set-path %[path,regsub(^/static,/)]

Generally, you should always use the http-request rules instead of the
(older) req* rules since the http-request rules are much more powerful
and cleaner, and have the advantage of always being evaluated in the
defined order, which the req* rules are not.

Old guides like the one you linked to in [2] were written before the
http-request rules got as powerful as they are today. The guide was
written for HAProxy 1.4 which is quite outdated now.

> Secondly, why was this particular regex used in the example? If I'm
> reading this correctly, quoting the string would simplify it to:
> 
> reqrep '^([^ :]*) /static/(.*)' '\1 /\2'

String quoting was only introduced in HAProxy 1.6. Before that (and
during the time reqrep was the only way to manipulate HTTP requests),
you always had to escape spaces with a backslash.

> But is this even clearer and safer:
> 
> reqrep '^([A-Z]+) /static(/.*)' '\1 \2'
> 
> or narrow this down (where appropriate to the type of request) to
> 
> reqrep '^(GET|POST) /static(/.*)' '\1 \2'
> 
> Perhaps the example would be clearer like this (even though this is
> slower and not needed here, it is clearly demonstrating the point):
> 
> reqrep '^(GET|POST) /static(/.*) (HTTP/.+)' '\1 \2 \3'

These are all valid variations for the common case. Still, clients might
decide to also send lower-case HTTP verbs or other variations. The
syntax from the guide is the most generic one.

Still, you should probably just get rid of any reqrep rules and just use
http-request.

Cheers,
Holger

[2]
https://fromanegg.com/post/2014/12/05/how-to-rewrite-and-redirect-with-haproxy/



Re: Compare against variable string in ACL

2017-05-08 Thread Holger Just
Hi Tim,

Tim Düsterhus wrote:
> I basically want an ACL that matches if 'hdr(host) == ssl_fc_sni' to use
> programming language terminology.

This is not directly possible right now using HAProxy ACLs since they
are only able to compare a dynamic value (the fetch) to a static
value. There is however a "trick" to still pull this off without having
to dive into Lua.

# We concatenate the HTTP Header and the SNI field in an internal header
# and check if the same value is repeated in that header.
http-request set-header X-CHECKSNI %[req.hdr(host)]==%[ssl_fc_sni] if { ssl_fc_has_sni }

# This needs to be a named capture because of "reasons".
# Back-References to normal captures seem to be rejected by HAProxy
http-request deny if { ssl_fc_has_sni } ! { hdr(X-CHECKSNI) -m reg -i ^(?<v>.+)==\1$ }

# Cleanup after us
http-request del-header X-CHECKSNI

We have been using basically this configuration snippet in production
for quite some years now and it works great.

Cheers,
Holger



[PATCH] Add b64dec sample converter

2017-05-05 Thread Holger Just
Hi all,

This patch against current master adds a new b64dec converter. It takes
a base64 encoded string and returns its decoded binary representation.

This converter can be used to e.g. extract the username of a basic auth
header to add it to the log:

acl BASIC_AUTH hdr_beg(Authorization) "Basic "
http-request capture hdr(Authorization),regsub(^Basic\ ,),b64dec if BASIC_AUTH

I'm open to suggestions for a better name for the converter.
base64_decode might work but doesn't suit the code formatting well and
is pretty long...

As a note to reviewers: please be aware that I'm not a C programmer at
all and I am way outside of my comfort zone here. As such, this function
might have unhandled edge-cases. I tried to model it according to the
existing base64 converter and my understanding of how the converters are
supposed to work but might have missed something.

Once verified, I think this converter can be safely added to the
supported stable versions of HAProxy.

Cheers,
Holger
From b6d63d491a82d9297b649b0a4bf043b93e8161ad Mon Sep 17 00:00:00 2001
From: Holger Just <he...@holgerjust.de>
Date: Sat, 6 May 2017 00:56:53 +0200
Subject: [PATCH] MINOR: sample: Add b64dec sample converter

Add "b64dec" as a new converter which can be used to decode a base64
encoded string into its binary representation. It performs the inverse
operation of the "base64" converter.
---
 doc/configuration.txt |  4 ++++
 src/sample.c          | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 06a1a2af..25ab2772 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12410,6 +12410,10 @@ base64
   transfer binary content in a way that can be reliably transferred (eg:
   an SSL ID can be copied in a header).
 
+b64dec
+  Converts (decodes) a base64 encoded input string to its binary
+  representation. It performs the inverse operation of base64().
+
 bool
   Returns a boolean TRUE if the input value of type signed integer is
   non-null, otherwise returns FALSE. Used in conjunction with and(), it can be
diff --git a/src/sample.c b/src/sample.c
index 71d4e32a..2e65e881 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -1481,6 +1481,23 @@ static int sample_conv_bin2base64(const struct arg *arg_p, struct sample *smp, void *private)
 	return 1;
 }
 
+static int sample_conv_base642bin(const struct arg *arg_p, struct sample *smp, void *private)
+{
+	struct chunk *trash = get_trash_chunk();
+	int bin_len;
+
+	trash->len = 0;
+	bin_len = base64dec(smp->data.u.str.str, smp->data.u.str.len, trash->str, trash->size);
+	if (bin_len < 0)
+		return 0;
+
+	trash->len = bin_len;
+	smp->data.u.str = *trash;
+	smp->data.type = SMP_T_BIN;
+	smp->flags &= ~SMP_F_CONST;
+	return 1;
+}
+
 static int sample_conv_bin2hex(const struct arg *arg_p, struct sample *smp, void *private)
 {
 	struct chunk *trash = get_trash_chunk();
@@ -2715,6 +2732,7 @@ static struct sample_conv_kw_list sample_conv_kws = {ILH, {
 #endif
 
 	{ "base64", sample_conv_bin2base64, 0, NULL, SMP_T_BIN, SMP_T_STR },
+	{ "b64dec", sample_conv_base642bin, 0, NULL, SMP_T_STR, SMP_T_BIN },
 	{ "upper",  sample_conv_str2upper,  0, NULL, SMP_T_STR, SMP_T_STR },
 	{ "lower",  sample_conv_str2lower,  0, NULL, SMP_T_STR, SMP_T_STR },
 	{ "hex",    sample_conv_bin2hex,    0, NULL, SMP_T_BIN, SMP_T_STR },
-- 
2.12.0



Re: Restricting RPS to a service

2017-04-19 Thread Holger Just
Hi Krishna,

Krishna Kumar (Engineering) wrote:
> Thanks for your response. However, I want to restrict the requests
> per second either at the frontend or backend, not session rate. I
> may have only 10 connections from clients, but the backends can
> handle only 100 RPS. How do I deny or delay requests when they
> cross a limit?

A "session" is this context is equivalent to a request-response pair. It
is not connected to a session of your applciation which might be
represented by a session cookie.

As such, to restrict the number of requests per second for a frontend,
rate-limit sessions is exactly the option you are looking for. It does
not limit the concurrent number of sessions (as maxconn would do) but
the rate with which new requests are coming in.

If there are more requests per second than the configured value, haproxy
waits until the session rate drops below the configured value. Once the
socket's backlog is full, requests will not be accepted by the kernel
anymore until it clears.

If you want to deny requests with a custom HTTP error instead, you could
use a custom `http-request deny` rule and use the fe_sess_rate or
be_sess_rate fetches.
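
A minimal sketch of that approach (threshold made up; the deny returns a 403 by default):

http-request deny if { fe_sess_rate gt 100 }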

Cheers,
Holger



Re: Restricting RPS to a service

2017-04-19 Thread Holger Just
Hi Krishna,

Krishna Kumar (Engineering) wrote:
> What is the way to rate limit on the entire service, without caring 
> about which client is hitting it? Something like "All RPS should be <
> 1000/sec"?

You can set a rate limit per frontend [1] (in a frontend section):

rate-limit sessions 1000

or globally per process [2] (in the global section):

maxsessrate 1000

Cheers,
Holger

[1]
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-rate-limit%20sessions
[2]
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#3.2-maxsessrate




Re: ACL with dynamic pattern

2017-04-11 Thread Holger Just
Hi Alexander,

Alexander Lebedev wrote:
> I want to implement CSRF check with haproxy.
> I want to check cookie value matched the header value and deny request
> if they're didn't equal.

The ACLs are only equipped to compare a dynamic value (e.g. from a
fetch) with a statically defined value. It is not possible to compare
two dynamic values.

There is a workaround for this however. It consists in building a single
combined dynamic value and checking whether it matches a specific pattern.

The following example shows how this can work. First, we build a new
temporary header containing the concatenated values of the cookie and
the header. We have to use a header here since the concatenation is a
log-format statement which is not allowed for `http-request set-var`.

In the ACL, we then check this concatenated value using a specific
regular expression. It checks whether the matched string contains the
exact same value two times, separated by the equal signs we added in the
header value.

When I built this some time ago, we had to declare a back-reference in
the regex for this to work. I'm not sure it is still required, but it
shouldn't hurt in any case.

# Build the temporary header
http-request set-header X-CHECK %[req.cook(token)]==%[req.hdr(token)]

# Deny the request if the header value is not valid
acl valid_csrf_token hdr(X-CHECK) -m reg ^(?<v>.+)==\1$
http-request deny unless valid_csrf_token

# Cleanup
http-request del-header X-CHECK

This should more-or-less work for your use case.

Two caveats apply:

(1) Your CSRF scheme seems a bit unconventional. For this to be secure,
please also ensure that the cookie is set to HttpOnly and that the
value your client sets in the header can not be gathered from outside
your origin (and is esp. not indirectly gathered from the cookie by the
client).

(2) The comparison of the values in the ACL is not done in constant
time. That means checking the values takes more or less time depending
on the inputs. An attacker can use this to check a lot of values and
measure the response time of HAProxy. With some statistical analysis, an
attacker can learn the value of the user's cookie by triggering several
(probably: a lot of) requests and measuring these tiny timing
differences. To actually be secure, you should instead perform the CSRF
check in your application and ensure you use a constant-time comparison
algorithm there.

Best,
Holger



Re: client connections being held open, despite option forceclose

2017-03-31 Thread Holger Just
Hi Patrick,

Patrick Kaeding wrote:
> I have one frontend, listening on port 443, and two backends, which send
> traffic to either port 5050 or 5051.  The haproxy stats screen is
> showing many more frontend connections than backend (in one case, 113k
> on the frontend, 97k on one backend, and 3k on the other backend).

Most browsers nowadays speculatively create more than one connection to
the server (HAProxy in this case) to use them for parallel downloading
of assets.

Now, such a connection to the frontend will only result in a connection
to the backend once the full HTTP request has been received and parsed
by HAProxy. Since some of these speculative connections will just sit
idle and will eventually get closed without having received any data,
the number of frontend connections is almost always higher than the sum
of backend connections.

In addition to that, you might observe more connections accepted by the
kernel than are shown in HAProxy's frontend. This is due to the fact
that a new connection is only forwarded to HAProxy from the kernel once
it is fully established and HAProxy has actively accepted it.

If you are running against your maxconn or generally on high load, some
connections might be accepted by the kernel already but not yet handled
by HAProxy.

Cheers,
Holger



Re: LUA: using converters in init phase

2017-03-24 Thread Holger Just
Hi Gabor,

Gabor Lekeny wrote:
> I would like to create a service which balances the HTTP requests on 
> many servers without passing through the traffic on the proxy:
> actually it would redirect (HTTP 3xx) to the target server.

You might be able to use the redir parameter on the server line without
having to dive into Lua. Since it follows HAProxy's normal server
selection algorithms, you wouldn't have to re-implement (or even query)
them in Lua.

To quote the docs at
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#5.2-redir

The "redir" parameter enables the redirection mode for all GET and HEAD
requests addressing this server. This means that instead of having
HAProxy forward the request to the server, it will send an "HTTP 302"
response with the "Location" header composed of this prefix immediately
followed by the requested URI beginning at the leading '/' of the path
component. That means that no trailing slash should be used after
<prefix>. All invalid requests will be rejected, and all non-GET or HEAD
requests will be normally served by the server. Note that since the
response is completely forged, no header mangling nor cookie insertion
is possible in the response. However, cookies in requests are still
analysed, making this solution completely usable to direct users to a
remote location in case of local disaster. Main use consists in
increasing bandwidth for static servers by having the clients directly
connect to them. Note: never use a relative location here, it would
cause a loop between the client and HAProxy!

Example :

server srv1 192.168.1.1:80 redir http://image1.mydomain.com check

Best,
Holger



Re: Strange behavior of sample fetches in http-response replace-header option

2017-02-08 Thread Holger Just
Hi Christopher,

Christopher Faulet wrote:
> You did well to reopen the issue. And you're right, this bug is similar
> to the one on redirect rules. I submitted a patch and it will be merged
> soon by Willy (see "[PATCH] 2 fixes for replace-header rules").

Thank you for the fix!

Best,
Holger



Re: Strange behavior of sample fetches in http-response replace-header option

2017-02-07 Thread Holger Just
Hi all,

I just checked and the issue is still present in current master. Could
you maybe have a look at this issue?

It smells a bit like this could potentially be connected to the issue
discussed in the thread "Lua sample fetch logging ends up in response
when doing http-request redirect". However, I couldn't reproduce my
issue when `http-request redirect`, neither with the patch nor without
so it might also be a red herring.

Regards,
Holger

Holger Just wrote:
> Hi there,
> 
> I observed some strange behavior when trying to use a `http-response
> replace-header` rule. As soon as I start using fetched samples in the
> replace-fmt string, the resulting header value is garbled or empty
> (depending on the HAProxy version).
> 
> Please consider the config in the attachment of this mail (in order to
> preserve newlines properly). As you can see, we add a Set-Cookie header
> to the response in the backend which is altered again in the frontend.
> Specifically, the configuration intends to replace the expires tag of
> the cookie as set by the backend and set a new value.
> 
> With this configuration, I observe the following headers when running a
> `curl http://127.0.0.1:8000`:
> 
> HAProxy 1.5.14 and haproxy-1.5 master:
> 
> Set-Cookie: WeWeWeWeWeWeWeWeWeWeWeWeWeWeWeW
> X-Expires: Wed, 05 Oct 2016 11:51:01 GMT
> 
> haproxy-1.6 master and current haproxy master:
> 
> Set-Cookie:
> X-Expires: Wed, 05 Oct 2016 11:51:01 GMT
> 
> The `http-response replace-header` rule works fine if we replace the
> sample fetch with a variable like %T. In that case, the value is
> properly replaced. Any use of a sample fetch results in the above
> garbled output.
> 
> The exact same behavior can be observed if a "real" backend is setting
> the original Set-Cookie header instead of using the listen / backend
> hack in the self-contained config I posted.
> 
> Am I doing something wrong here or is it possible that there is an issue
> with applying sample fetches here?
> 
> 
> I tested this both on freshly compiled HAProxies on MacOS with `make
> TARGET=generic` as well as on a HAProxy 1.5.14 with the following stats:
> 
> HA-Proxy version 1.5.14 2015/07/02
> Copyright 2000-2015 Willy Tarreau <wi...@haproxy.org>
> 
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
>   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Compression algorithms supported : identity, deflate, gzip
> Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
> Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.35 2014-04-04
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Thanks for your help,
> Holger



Re: Hitting rate limit?

2017-01-17 Thread Holger Just
Hi Atha,

Atha Kouroussis wrote:
> Output from ab against haproxy:
> Concurrency Level:  200
> Time per request:   49.986 [ms] (mean)

If you check these numbers, you'll notice that with a time of 49 ms per
request and 200 concurrent requests, you'll end up at about 4000
requests / second:

(1000 ms/s / 49 ms per request) * 200 concurrent requests ≈ 4081 req/s

Thus, in order to achieve a higher throughput, you have two options:

* You could try to reduce the time required per request, which probably
helps a certain amount,
* or you could increase the concurrency of your requests with ab (see
the example below). Since in the real world, you'll probably get fewer
requests per source from way more sources, this would probably simulate
your actual production load even better.
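
For instance (numbers and URL made up), this would run the benchmark with 1000 concurrent connections:

$ ab -v 1 -c 1000 -n 10000 https://your-haproxy/test/index.html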

Best,
Holger



Update of SSL certificate on haproxy.org

2016-12-27 Thread Holger Just
Hi Willy,

Recently, you updated the SSL certificate of haproxy.org,
git.haproxy.org, ... to a new certificate from StartSSL.

Unfortunately, there was recently an incident of several misissued
certificates by this CA as well as shady business decisions involving
WoSign, which resulted in Chrome [1] and Firefox [2] no longer trusting
the CA's root certificates with their next respective releases. Apple
has revoked trust in certificates issued after December 1 [3] which just
barely doesn't affect the current cert. I have found no statement by
Microsoft.

With the next release of Firefox and Chrome, users using the https
versions of the websites will thus receive a strongly worded error
similar to other TLS errors involving invalid certificates.

I'd thus recommend to update the certificate again and use a more
trusted CA. With Let's Encrypt being widely supported, well automatable
and also free, I'd recommend this one.

Best,
Holger

[1]
https://security.googleblog.com/2016/10/distrusting-wosign-and-startcom.html
[2]
https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/
[3] https://support.apple.com/en-us/HT202858



Re: HAProxy reloads lets old and outdated processes

2016-10-25 Thread Holger Just
Hey Willy,

Willy Tarreau wrote:
> I absolutely despise systemd and each time I have to work on the
> wrapper I feel like I'm going to throw up. So for me working on this
> crap is a huge pain each time. But I'm really fed up with seeing
> people having problems in this crazy environment because one
> clueless guy decided that he knew better than all others how a daemon
> should reload, so whatever we can do to make our users' lifes easier
> in captivity should at least be considered.

Just to be sure, I don't like systemd for mostly the reasons you
mentioned. However, I do use the systemd wrapper to reliably run HAProxy
under runit for a couple of years now.

Since they (similar to most service managers) also expect a service to
have one stable parent process even after reloading, the systemd wrapper
acts as a nice workaround to facilitate reloading. The same wrapper
allows simple service handling with Solaris's SMF and is a much better
solution than the crude python script I wrote a couple of years ago for
this simple process.

I guess what I've been trying to say is: the wrapper is absolutely
useful for about any process manager, not just systemd and I would love
to see it stay compatible with other process managers like runit.

Thanks for the great work Willy, here and on the Kernel.

Regards, Holger



Strange behavior of sample fetches in http-response replace-header option

2016-10-05 Thread Holger Just
Hi there,

I observed some strange behavior when trying to use a `http-response
replace-header` rule. As soon as I start using fetched samples in the
replace-fmt string, the resulting header value is garbled or empty
(depending on the HAProxy version).

Please consider the config in the attachment of this mail (in order to
preserve newlines properly). As you can see, we add a Set-Cookie header
to the response in the backend which is altered again in the frontend.
Specifically, the configuration intends to replace the expires tag of
the cookie as set by the backend and set a new value.

With this configuration, I observe the following headers when running a
`curl http://127.0.0.1:8000`:

HAProxy 1.5.14 and haproxy-1.5 master:

Set-Cookie: WeWeWeWeWeWeWeWeWeWeWeWeWeWeWeW
X-Expires: Wed, 05 Oct 2016 11:51:01 GMT

haproxy-1.6 master and current haproxy master:

Set-Cookie:
X-Expires: Wed, 05 Oct 2016 11:51:01 GMT

The `http-response replace-header` rule works fine if we replace the
sample fetch with a variable like %T. In that case, the value is
properly replaced. Any use of a sample fetch results in the above
garbled output.

The exact same behavior can be observed if a "real" backend is setting
the original Set-Cookie header instead of using the listen / backend
hack in the self-contained config I posted.

Am I doing something wrong here or is it possible that there is an issue
with applying sample fetches here?


I tested this both on freshly compiled HAProxies on MacOS with `make
TARGET=generic` as well as on a HAProxy 1.5.14 with the following stats:

HA-Proxy version 1.5.14 2015/07/02
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Thanks for your help,
Holger
defaults
  mode http
  timeout client 5s
  timeout server 5s
  timeout connect 5s

frontend fe
  bind :8000

  http-response add-header X-Expires %[date(3600),http_date]
  http-response replace-header Set-Cookie ^(.*expires=).*$ \1%[date(3600),http_date]

  use_backend be

backend be
  http-response add-header Set-Cookie key=value;\ path=/;\ expires=Wed,\ 1-Jan-2020\ 23:59:59\ GMT

  server empty 127.0.0.1:8081

listen empty
  bind 127.0.0.1:8081


Re: nbproc best practices

2016-10-04 Thread Holger Just
Hi Mariusz,

Mariusz Gronczewski wrote:
> we've come to the point when we have to start using nbproc > 1 (mostly
> because going SSL-only in coming months) and as I understand I have
> to bind each process to separate admin socket and then repeat every
> command for each process, and in case of stats also sum up the
> counters.

For statistics, there exists a LUA script you can use in HAProxy which
aggregates the statistics of multiple processes. See
http://www.arpalert.org/haproxy-scripts.html#stats

As for socket commands, you can often circumvent the whole issue by
applying a multi-stage architecture where you have several "dumb"
processes just terminating SSL and forwarding the plain-text traffic to
a single HAProxy process which performs all of the actual
loadbalancing rules.

With clever bind-process rules and by using send-proxy-v2 this is pretty
workable. Often, there is then no need for close introspection of the
frontend processes anymore, nor is there a need to send socket commands
to them since they always send all their traffic to the main HAProxy
process anyway.
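
A rough sketch of such a two-tier setup (untested, all names and paths made up):

# several "dumb" SSL-terminating processes
listen ssl-offload
  bind-process 2-4
  bind :443 ssl crt /etc/haproxy/certs/site.pem
  server main unix@/var/run/haproxy-main.sock send-proxy-v2

# a single process performing the actual loadbalancing
frontend fe_main
  bind-process 1
  bind unix@/var/run/haproxy-main.sock accept-proxy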

Best,
Holger



Re: I cannot %[lua.xxx] in server directive

2016-07-13 Thread Holger Just
Hi Takada,

Takada Shigeomi wrote:
> global
>   lua-load get_backend.lua
> 
> listen example
>   mode tcp
>   bind :30000-30005
>   server MYSERVER %[lua.backend]
> ---
> 
> ---ERROR CONTENT--
> [ALERT] 194/145111 (21636) : parsing [haproxy.cfg:20] : 'server MYSERVER' : 
> invalid address: '%[lua.backend]' in '%[lua.backend]
> ---

You can't dynamically define arbitrary server hosts. The target needs to
be fixed, e.g. a fixed IP address or a fixed hostname. Thus, you can't
select an arbitrary target server for each incoming connection.

What you can do however is to define a number of backends, each
containing a fixed server and dynamically selecting the backend where
the request should be routed to using something like

global
  lua-load get_backend.lua

frontend example
  mode tcp
  bind :30000-30005

  use_backend example_%[lua.backend]

backend example_30000
  server MYSERVER 127.0.0.1:30000

backend example_30001
  server MYSERVER 127.0.0.1:30001

...

Alternatively, if the IP address is always the same, you can use the
same port as the incoming connection by using something like this and
omit the port (or specify a delta value):

frontend example
  mode tcp
  bind :30000-30005

  use_backend bk_example

backend bk_example
  server MYSERVER 127.0.0.1

Regards,
Holger



Re: counters for specific http status code

2016-07-13 Thread Holger Just
Hi Willy,

Willy Tarreau wrote:
>> At first I was thinking whether we could track the response status in stick
>> table, then it may be neat. but currently there isn't `http-response
>> track-sc?` directive. can it?
> 
> Interesting. No it isn't, just because I think we never found a valid
> use case for it. It's possible that you found the first one in fact :-)

Having this capability would also solve a long-standing itch for myself.

We have some authenticated services (via Basic Auth and other means)
which signal an authentication failure via a 403 status. We want to
throttle and finally block IPs which cause too many authentication
failures. Stick tables would be great for that as long as we could store
the response status to use it in subsequent requests.

I think, right now, we could build a crutch with `http-response
sc-inc-gpc0` but having real http-response track-sc actions would make
things much easier and cleaner.

As such, I'm all in favor of this :)

Regards,
Holger



Re: Refuse connection if no certificate match

2016-06-22 Thread Holger Just
Hi Olivier,

Olivier Doucet wrote:
> Is there a way to not present the first loaded certificate and refuse
> connection instead ?

You can use the strict-sni argument on the bind line to force the client
to speak SNI and refuse the TLS handshake otherwise.
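
For example (certificate directory made up):

bind :443 ssl crt /etc/haproxy/certs/ strict-sni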

See the documentation for details at

http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.1-strict-sni

--Holger



Re: 100% cpu , epoll_wait()

2016-06-10 Thread Holger Just
Hi Willy et al.,

> Thank you for this report, it helps. How often does it happen, and/or after
> how long on average after you start it ? What's your workload ? Do you use
> SSL, compression, TCP and/or HTTP mode, peers synchronization, etc ?

Yesterday, we upgraded from 1.5.14 to 1.5.18 and now observed exactly
this issue in production. After rolling back to 1.5.14, it didn't occur
anymore.

We have mostly HTTP traffic and a little TCP, with about 100-200 req/s
and about 2000 concurrent connections overall. Almost all traffic is SSL
terminated. We use no peer synchronization and no compression.

An strace on the process reveals this (with most of the calls being
epoll_wait):

[...]
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {{EPOLLIN, {u32=796, u64=796}}}, 200, 0) = 1
read(796, "
\357\275Y\231\275'b\5\216#\33\220\337'\370\312\215sG4\316\275\277y-%\v\v\211\331\342"...,
5872) = 1452
read(796, 0x9fa26ec, 4420)  = -1 EAGAIN (Resource
temporarily unavailable)
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
epoll_wait(0, {}, 200, 0)   = 0
[...]

The strace was done after reloading using -sf. However, the process was
at 100% load even before the reload.

Since we kept the process running after the reload (it still holds some
connections), I was able to run a second strace, about half an hour
later which now show a different behavior:

[...]
epoll_wait(0, {}, 200, 4)   = 0
epoll_wait(0, {}, 200, 7)   = 0
epoll_wait(0, {}, 200, 3)   = 0
epoll_wait(0, {}, 200, 6)   = 0
epoll_wait(0, {}, 200, 3)   = 0
epoll_wait(0, {}, 200, 3)   = 0
epoll_wait(0, {}, 200, 10)  = 0
epoll_wait(0, {}, 200, 3)   = 0
epoll_wait(0, {}, 200, 27)  = 0
epoll_wait(0, {}, 200, 6)   = 0
epoll_wait(0, {}, 200, 4)   = 0
[...]

The CPU load taken by the process is now back to more or less idle load,
without further intervention on the process.

`haproxy -vv` of the process running into the busy-loop shows

HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1t  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.1t  3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Unfortunately, since we have rolled back production to 1.5.14, we have
now little possibility to reproduce this anymore. The process which
shows the behavior is still running for the time being though.

Regards,
Holger



Re: external-check error ??

2016-06-09 Thread Holger Just
Hi Hugo,

Hugo Delval wrote:
> global
> # [...]
> chroot /var/lib/haproxy
>
> # [...]
> 
> backend web-backend
> balance roundrobin
> option external-check
> external-check path "/usr/bin:/bin:/tmp"
> external-check command /bin/true
> server web1 127.0.0.1:80 check
> server web2 127.0.0.1:81 check


You are configuring HAProxy to drop into a chroot directly after start.
Thus, any scripts or external tools (including all their required
libraries and potentially device nodes) it runs have to be inside the
chroot directory. In your case, this is probably not the case.

Thus, you can either get rid of the chroot completely or move all your
dependencies into the chroot. The latter is probably a bit of a hassle
for more complex checks but might be more secure.

Good luck,
Holger



Invalid 301 redirect target URL on haproxy.org

2016-06-09 Thread Holger Just
Hi,

when navigating to a directory of the downloads section on haproxy.org
while omitting the trailing slash, e.g.

http://www.haproxy.org/download/1.5

the response is a 301 redirect to

http://www.haproxy.org:81/download/1.5/

which I assume is generated by the backend Apache by adding its internal
port to the generated URL. This could potentially be solved by adding
this config rule in the frontend HAProxy to drop any explicit port
number from the redirects (or any other way you see fit :)

http-response replace-header Location ^(https?://[^:]*):\d+/(.*) \1/\2

Regards,
Holger



Re: use env variables in bind for bind options

2016-05-20 Thread Holger Just
Hi Aleks,

Aleksandar Lazic wrote:
> My conclusion is that with or without " the ${...} is not substituted,
> at least in the bind line.

From your output, it looks like you are using an older version of
HAProxy. The behavior of quoted strings in the config changed in HAProxy
1.6. It appears you are using an older version (e.g. 1.5) which does
indeed not support this syntax.

That said, even on HAProxy 1.5.14, I have been able to validate your
syntax (there without the quotes).

Please ensure you are using a reasonably up-to-date version of HAProxy
(which you can verify with `haproxy -vv`) and that you actually set all
used environment variables with their respective values when starting
HAProxy.

The last one is crucial as HAProxy does not replace environment
variables in the config file if the environment variable is not actually
defined. From your original output, it appears you are not defining the
${ROUTER_SERVICE_HTTPS_PORT_BIND_OPTONS} variable in the environment
which thus results in the parse error.

Regards,
Holger



Re: use env variables in bind for bind options

2016-05-20 Thread Holger Just
Hi Aleks,

Aleksandar Lazic wrote:
> ### bind :${ROUTER_SERVICE_HTTP_PORT} 
> ${ROUTER_SERVICE_HTTP_PORT_BIND_OPTONS} ###
> 
> It's look to me that this is not possible.

To quote from Section 2.3 of configuration.txt:

> Those variables are interpreted only within double quotes. Variables 
> are expanded during the configuration parsing. Variable names must be
> preceded by a dollar ("$") and optionally enclosed with braces ("{}")
> similarly to what is done in Bourne shell.

Thus, it should work once you enclose your bind values in double quotes:

bind ":${ROUTER_SERVICE_HTTP_PORT}" "${ROUTER_SERVICE_HTTP_PORT_BIND_OPTONS}"

This will however prevent you from setting multiple (space-separated)
bind options as they will only be recognized as a single value due to
the quotes.
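
For example, starting HAProxy roughly like this (values made up) should expand both variables as expected:

$ ROUTER_SERVICE_HTTP_PORT=8080 \
  ROUTER_SERVICE_HTTP_PORT_BIND_OPTONS=accept-proxy \
  haproxy -c -f /etc/haproxy/haproxy.cfg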

Regards,
Holger



Re: RFC: Statically enable SSL_OP_SINGLE_DH_USE

2016-02-09 Thread Holger Just
Hi Lukas,

Lukas Tribus wrote:
>>> I don't see it. Can you please elaborate what exact commit ID your are
>>> refering to?
>> You are probably refering to the github fork, which is as always outdated,
>> and where line 2539 points to the local definition of SSL_OP_SINGLE_DH_USE:
>> #ifndef SSL_OP_SINGLE_ECDH_USE
>> #define SSL_OP_SINGLE_ECDH_USE 0
>> #endif
> 
> Actually I mixed up SSL_OP_SINGLE_DH_USE with SSL_OP_SINGLE_ECDH_USE, but
> the point is still the same.

Ah, now it becomes clear. The #defines are just to set defaults if the
constant isn't defined at all. Normally, the constant is defined by
OpenSSL itself. As HAProxy sets it in [1], everything should be fine.

Sorry for the noise from someone who reads way too little C code...

Regards,
Holger

[1]
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/src/ssl_sock.c;h=5cec6a4cd6ce5d16f9564e60fa57b24c46112bac;hb=HEAD#l2560



Re: RFC: Statically enable SSL_OP_SINGLE_DH_USE

2016-02-09 Thread Holger Just
Hi Lukas,

Lukas Tribus wrote:
> I don't see it. Can you please elaborate what exact commit ID your are
> refering to?

I was looking at
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/src/ssl_sock.c;h=5cec6a4cd6ce5d16f9564e60fa57b24c46112bac;hb=HEAD#l2539

> As far as I an see we do the exact opossite of what you are saying
> (enabling SSL_OP_SINGLE_DH_USE unconditionally).

Sorry, it might well be that I'm too unfamiliar with the way these flags
work. I assumed that setting it to 0 disabled the flag. If this is
actually not the case, then I hereby retract my request and resolve to
read more about how this is intended to work...

Regards,
Holger



RFC: Statically enable SSL_OP_SINGLE_DH_USE

2016-02-09 Thread Holger Just
Hi there,

following CVE-2016-0701, the OpenSSL project switched the behavior of
the SSL_OP_SINGLE_DH_USE flag to a no-op and forcefully enabled the
feature. This results in OpenSSL always generating new DH parameters
for each handshake, which can protect the private DH exponent from
certain attacks.

In HAProxy, this flag is currently statically disabled by default in
src/ssl_sock.c line 2539. Thus, when used with older OpenSSL versions
than 1.0.1r or 1.0.2f, users could be vulnerable.

I propose to enable this flag in HAProxy to be safe with older OpenSSL
versions, e.g. when users are unable to update their OpenSSL library due
to system constraints. On current OpenSSL versions, this switch is a
no-op anyway.

Regards,
Holger

PS: I'm not too familiar with actually working with OpenSSL itself or
its various pitfalls. Thus, it is probably a good idea for someone more
familiar with OpenSSL to have a critical look at this request.

References:
* OpenSSL Security Advisory: http://openssl.org/news/secadv/20160128.txt
* Fix in OpenSSL 1.0.1:
https://github.com/openssl/openssl/commit/8bc643efc89cbcfba17369801cf4eeca037b6cc1
* Fix in OpenSSL master:
https://github.com/openssl/openssl/commit/ffaef3f1526ed87a46f82fa4924d5b08f2a2e631




Re: http_date converter gives wrong date

2016-01-22 Thread Holger Just
Hi,

Gregor Kovač wrote:
> The problem I have here is that Expires should be Friday and not Saturday.

This is indeed a bug in HAProxy as it assumes the weekday to start on
Monday instead of Sunday. The attached patch fixes this issue.

The patch applies cleanly against master and 1.6.


Regards,
Holger
From 32cf0c931f0c4bfd3ea687aa7399e4f95626b6ad Mon Sep 17 00:00:00 2001
From: Holger Just <he...@holgerjust.de>
Date: Fri, 22 Jan 2016 19:23:43 +0100
Subject: [PATCH] BUG/MINOR: Correct weekdays in http_date converter

Days of the week as returned by gmtime(3) are defined as the number of
days since Sunday, in the range 0 to 6.
---
 src/proto_http.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index e362a96..2f76afe 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -11973,7 +11973,7 @@ int val_hdr(struct arg *arg, char **err_msg)
  */
 static int sample_conv_http_date(const struct arg *args, struct sample *smp, void *private)
 {
-	const char day[7][4] = { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };
+	const char day[7][4] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
 	const char mon[12][4] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
 	struct chunk *temp;
 	struct tm *tm;
-- 
2.6.4



Re: HAProxy does not write 504 on keep-alive connections

2015-11-11 Thread Holger Just
Hi,

Willy Tarreau wrote:
> As explained above, it's because a keep-alive enabled client must implement
> the ability to replay requests for which it didn't get a response because
> the connection died. In fact we're forwarding to the client what we saw on
> the server side so that the client can take the correct decision. If your
> client was directly connected to the server, it would have seen the exact
> same behaviour.

One of the problems we saw when trying to reproduce this issue was that
some clients I tried (i.e. curl 7.45.0 and ruby's httpclient gem)
silently replayed any requests for which they didn't receive an answer.

This can result in duplicated POSTs on the backend servers. Often,
servers continue to handle the first POST, even after HAProxy closed the
backend connection because its server timeout struck. Now, the client
just replays the POST again, resulting in potentially fatal behavior.

If I understand the HTTP specs correctly, this replay is correct from
the client's perspective as they can't know whether they are speaking to
a loadbalancer or an origin server directly.

As a loadbalancer however, HAProxy should always return a proper HTTP
error if the request was even partially forwarded to the server. It's
probably fine to just close the connection if the connect timeout struck
and the request was never actually handled anywhere, but it should
definitely return a real HTTP error if it's the server timeout and a
backend server started doing anything with a request.

You could probably argue to differentiate between safe and unsafe
methods and to also just close for safe ones, but that is probably even
more confusing and has the potential for subtle bugs.

Best,
Holger



http://haproxy.org points to http://1wt.eu

2015-11-10 Thread Holger Just
Hi Willy,

It seems that the loadbalancer or DNS configuration of haproxy.org is
broken right now. When navigating to http://haproxy.org, only Willy's
personal website, normally reachable at http://1wt.eu is returned.

haproxy.org currently resolves to 195.154.117.161 and
2001:7a8:363c:2::2. The issue can be observed on both IPs.

Unfortunately, this results in the website and the source code
downloads being unavailable. git.haproxy.org seems to work fine though.

Could you have a look please?

Thanks,
Holger



Re: http://haproxy.org points to http://1wt.eu

2015-11-10 Thread Holger Just
Hi Willy,

Willy Tarreau wrote:
> Some virtual host routing needs to be fixed there. For now the PSU was
> replaced and everything's OK.

Thanks for the quick turnaround! A+ support. Would buy again :)

Best,
Holger



Re: ha-proxy strange behavior with check localhost option

2015-08-10 Thread Holger Just
Hi BLN,

bln prasad wrote:
> I'm not sure why health check is failing if it's localhost on few
> systems and this is observed with only 1.5.14 version.
> ideally there should not be any difference between localhost and
> 127.0.0.1 right.

Localhost can resolve to several different IPs, including

* any in the network of 127.0.0.0/8, most commonly 127.0.0.1
* ::1 (the defined localhost address in IPv6)

The most common issue here is indeed the difference between IPv4 and
IPv6. If your server is IPv6 aware, HAProxy might try to connect to ::1
only. If your service only listens on 127.0.0.1 however, this will
probably break.

Also, some systems use other local IP addresses than 127.0.0.1 in their
configuration. E.g. some Debian and Ubuntu distributions define the
fixed local hostname as 127.0.1.1. This was added in order to work
around some configuration issues. See
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=316099 for details.

You should have a look at your /etc/hosts file to check this. Generally
(at least right now), it is always a good idea to specify full IP
addresses instead of hostnames in the HAProxy configuration. As
specified hostnames are resolved only once during startup (in HAProxy
1.5), you are thus safe from hard-to-debug issues stemming from
different opinions about a hostname-ip mapping in DNS and in your
running HAProxy processes.

Regards,
Holger



Re: Updating a stick table from the HTTP response

2015-06-30 Thread Holger Just
Hello all,

Unfortunately, I have not received any feedback on my earlier email so
I'll try again.

I am still struggling trying to implement a throttling mechanism based
on prior HTTP responses of the same client.

Basically, I have some requests (using Basic Auth or other mechanisms)
that might result in HTTP 401 responses from the backend server. I want
to throttle further requests by the same client if the number of such
responses reaches a certain threshold (responses / second). In that
case, I want to first delay and finally block new requests completely
until the error rate goes down.

In order to implement this, it would be great if we would have the
possibility to update stick entries based on the response and not only
the request, e.g.

tcp-response content track-sc2 src if { status 401 }

Is something like this feasible, maybe under the restriction that the
user has to make sure on their own that the data required to update the
stick table entry is still available?

Thank you for your feedback.

--Holger


Holger Just wrote:
 Hello all,
 
 with HAProxy 1.5.11, we have implemented rate limiting based on some
 aspects of the request (Host header, path, ...). In our implementation,
 we delay limited requests by forcing a WAIT_END in order to prevent
 brute-force attacks against e.g. passwords or login tokens:
 
 
 acl bruteforce_slowdown sc2_http_req_rate gt 20
 acl limited_path path_beg /sensitive/stuff
 
 stick-table type ip size 100k expire 30m store http_req_rate(300s)
 tcp-request content track-sc2 src if METH_POST limited_path
 
 # Delay the request for 10 seconds if we have too many requests
 tcp-request inspect-delay 10s
 tcp-request content accept unless bruteforce_slowdown limited_path
 tcp-request content accept if WAIT_END
 
 
 As you can see above, we track only certain requests to sensitive
 resources and delay further requests after 20 req / 300 s without taking
 the actual response into account. This is good enough for e.g. a web
 form to login or change a password.
 
 Now, unfortunately we have some endpoints which are protected with Basic
 Auth which is validated by the application. If the password is
 incorrect, we return an HTTP 401.
 
 In order to prevent brute-forcing of passwords against these endpoints,
 we would like to employ a similar delay mechanism. Unfortunately, we
 can't detect from the request headers alone if we have a bad request but
 have to inspect the response and increase the sc2 counter only if we
 have seen a 401.
 
 In the end, I would like to use a fetch similar to sc1_http_err_rate but
 reduced to only specific cases, i.e. 401 responses on certain paths or
 Host names.
 
 Now the problem is that we apparently can't manipulate the stick table
 from a HTTP response, or more precisely: I have not found a way to do it.
 
 We would like to do something like
 
 
 tcp-response content track-sc2 src if { status 401 }
 
 
 which would allow us to track these error-responses similar to the first
 approach and handle the next requests the same way as above.
 
 Now my questions are:
 
 * Is something like this possible/feasible right now?
 * Is there some other way to implement rate limiting based on certain
   server responses?
 * If this is not possible right now, would it be feasible to implement
   the possibility to track responses similar to what is possible with
   requests right now?
 
 Thank you for your feedback,
 Holger Just
 



Re: Choosing servers based on IP address

2015-06-04 Thread Holger Just
Hi Andy,

Please always CC the mailing list so that others can help you too and
can learn from the discussion.

Franks Andy (IT Technical Architecture Manager) wrote:
 Hi Holger,
   Sorry, I will elaborate a bit more!
 We are going to implement Microsoft exchange server 2010 (sp3) over two
 AD sites. At the moment we have two servers, one at each site.
 With a two site AD implementation with out-of-the-box settings, even if
 the two sites are connected via a decent link, clients from site A are
 not permitted to use the interface to the database (the CAS) at site B
 to get to the database at site A, unless the whole site is down.
 I would like to have 2 load balancing solutions - one at each site with
 a primary connection to the server at same site, but then a failover if
 that server goes down.
 That's all fine, but it would be ideal if we had a load balancing
 solution that could take connections from site A and route them to the
 server at site B in normal situations too with some logic that said If
 client is from IP x.x.x.x, then always use server B rather than A/B
 depending on the hard coded weight.
 It would open up lots more DR recovery potential for a multiple site
 like this. Thinking about it, I can't really understand why it's not
 done more - redirecting based on where something is coming from.. You
 could redirect DMZ traffic one way and ordinary another without
 complicated routing.
 Am I missing a trick?
 Thanks
 Andy

If I understood you right, you have two sites, each with an Exchange
server and some clients. You normally want the clients on Site A to only
connect to EXCH-A (exchange server at Site A). However, if the server is
down, you want the clients on Site A to connect to the exchange server
on Site B instead.


SITE A|SITE B
--+
  |
Client-1A ---,|   ,--- Client-2A
  \   |  /
Client-1B -- HAPROXY -+ HAPROXY -- Client-2B
  /   \\  | //   \
Client-1C ---'   EXCH-A   |  EXCH-B   `--- Client-2C
  |

This is easily possible with a backend section where one server is
designated as a backup server which will thus only used if all
non-backup-servers are down:

backend SMTP-A
  server exch-a 10.1.0.1:25 check
  server exch-b 10.2.0.1:25 check backup

With this config, the primary server (exch-a) is used for all
connections. If it is down, the backup server exch-b is used until
exch-a is up again.

Now, in order to route clients from Site B to their own exchange, even
if they arrive on the HAproxy from Site A, you can define an additional
backend with flipped roles:

backend SMTP-B
  server exch-a 10.1.0.1:25 check backup
  server exch-b 10.2.0.1:25 check

you can then route requests in the frontend to the appropriate backend
based on the source IP:

frontend smtp
  bind :25

  acl from-site-a src 10.1.0.0/16
  acl from-site-b src 10.2.0.0/16

  use_backend SMTP-A if from-site-a
  use_backend SMTP-B if from-site-b
  default_backend SMTP-A

I hope this is clear. Please read the configuration manual regarding
additional server options which can affect stickiness and handling of
existing sessions on failover:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2
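
For instance, one such option is on-marked-down (a sketch; whether you
want this depends on your clients, and the option requires HAProxy 1.5):

backend SMTP-A
  # close established sessions when a server is marked down so that
  # clients reconnect and get rerouted to the remaining server
  server exch-a 10.1.0.1:25 check on-marked-down shutdown-sessions
  server exch-b 10.2.0.1:25 check backup on-marked-down shutdown-sessions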

Regards,
Holger



Re: Choosing servers based on IP address

2015-06-03 Thread Holger Just
Hi Andy,

Franks Andy (IT Technical Architecture Manager) wrote:
   Quick question – can anyone think of a way to change a server’s weight
 based on some criteria, for example source IP address? It would be so
 useful when dealing with a common service that has two distinct sites,
 and rules in place that stop access to resources from the “wrong” site,
 like Exchange (where you can’t access your mailbox from the wrong
 site-based CAS server).

I'm not really sure what you are /actually/ trying to achieve.
Generally, the weight of a server is used to determine which percentage
of requests should go to that server.

However, from your description, it seems you want to completely disallow
certain requests/connections based on some criteria. In this case, it
would make more sense to use http-request deny (or tcp-request content
reject) rules with ACLs describing your criteria.
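
A minimal sketch with made-up addresses (using the TCP variant, as it
also works for non-HTTP traffic):

frontend mail
  mode tcp
  acl wrong-site src 10.2.0.0/16
  tcp-request content reject if wrong-site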

Using different weights per client (e.g. sending 25% of requests from an
IP to one server and 75% to a different server) seems rather strange as
you still would have to provide all resources on both servers. In this
case, a globally uniform distribution sounds much more appealing,
doesn't it?

Regards,
Holger



Updating a stick table from the HTTP response

2015-04-29 Thread Holger Just
Hello all,

with HAProxy 1.5.11, we have implemented rate limiting based on some
aspects of the request (Host header, path, ...). In our implementation,
we delay limited requests by forcing a WAIT_END in order to prevent
brute-force attacks against e.g. passwords or login tokens:


acl bruteforce_slowdown sc2_http_req_rate gt 20
acl limited_path path_beg /sensitive/stuff

stick-table type ip size 100k expire 30m store http_req_rate(300s)
tcp-request content track-sc2 src if METH_POST limited_path

# Delay the request for 10 seconds if we have too many requests
tcp-request inspect-delay 10s
tcp-request content accept unless bruteforce_slowdown limited_path
tcp-request content accept if WAIT_END


As you can see above, we track only certain requests to sensitive
resources and delay further requests after 20 req / 300 s without taking
the actual response into account. This is good enough for e.g. a web
form to login or change a password.

Now, unfortunately we have some endpoints which are protected with Basic
Auth which is validated by the application. If the password is
incorrect, we return an HTTP 401.

In order to prevent brute-forcing of passwords against these endpoints,
we would like to employ a similar delay mechanism. Unfortunately, we
can't detect from the request headers alone if we have a bad request but
have to inspect the response and increase the sc2 counter only if we
have seen a 401.

In the end, I would like to use a fetch similar to sc1_http_err_rate but
reduced to only specific cases, i.e. 401 responses on certain paths or
Host names.

Now the problem is that we apparently can't manipulate the stick table
from a HTTP response, or more precisely: I have not found a way to do it.

We would like to do something like


tcp-response content track-sc2 src if { status 401 }


which would allow us to track these error-responses similar to the first
approach and handle the next requests the same way as above.

Now my questions are:

* Is something like this possible/feasible right now?
* Is there some other way to implement rate limiting based on certain
  server responses?
* If this is not possible right now, would it be feasible to implement
  the possibility to track responses similar to what is possible with
  requests right now?

Thank you for your feedback,
Holger Just



Re: how to sync HaProxy config with ZooKeeper

2014-07-10 Thread Holger Just
Hi,

Зайцев Сергей Александрович wrote:
 So the question is - is the a way to synchronized HaProxy's
 configuration with ZooKeeper ( somehow ).

Airbnb uses a tool called Synapse [1] as part of their Smartstack
platform [2]. It integrates HAProxy and zookeeper to provide high
availability by using node-local loadbalancers that get reconfigured on
the fly according to data in zookeeper.

Synapse provides an external watcher program which reconfigures HAProxy
using both the unix socket (where possible) as well as by generating
updated config files. To try it out, you could use the provided
smartstack Chef cookbook [3].

Patrick Viet (formerly of Airbnb, now at GetYourGuide) recently talked
about Smartstack (and the adaptations they have done at GetYourGuide,
i.e. changing from Zookeeper to Serf) at the Berlin Devops Meetup. You
can find the video on Youtube [4].

Maybe, you could gather some ideas and implementation details from their
solution.

Regards,
Holger

[1] https://github.com/airbnb/synapse
[2] http://nerds.airbnb.com/smartstack-service-discovery-cloud/
[3] https://github.com/airbnb/smartstack-cookbook
[4] https://www.youtube.com/watch?v=y739V9MMoLE



Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Holger Just
Hi Bernhard,

Bernhard Weißhuhn wrote:
 When downloading the tar.gz, the chef client sends :80 as part of the host 
 header (which is legal from my understanding of the rfc).
 This header reliably results in a 404, whereas leaving out the port number 
 results in a successful download:

This happens because chef creates an unusual Host-header for its
remote_file resources right now. This currently breaks not only
haproxy.org but many other services as well.

The issue is fixed in the chef master branch already [1]. To use the fix
right now, you can add a monkey-patch into one of your cookbooks which
patches chef's core with the fix [2]. The linked gist is a direct
translation of the patch. I use this currently in production if that
matters to you.

Regards,
Holger

[1] https://github.com/opscode/chef/pull/1471
[2] https://gist.github.com/meineerde/83e044c709b94358a616



Re: HAProxy Next?

2013-12-17 Thread Holger Just
Annika Wickert wrote:
 - Include possibility in configfile to maintain one configfile for each
 backend / frontend pair

There are several scripts out there which concat files in a well-known
directory structure together to form a single final config file [1][2][3].
These can be used in your init script just before starting HAProxy itself.

I found this approach very versatile as it allows me to structure my
configs the way I want. The only thing that doesn't easily work with
this approach is to include one file in multiple places, although that
too could be solved with symlinks. And I never actually missed that
functionality.

Finally, HAProxy accepts multiple -f arguments to load
multiple files on its own.
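
For example (the file names are illustrative), the files are parsed in
the order given on the command line:

haproxy -f /etc/haproxy/00-global.cfg -f /etc/haproxy/10-frontends.cfg \
  -f /etc/haproxy/20-backends.cfg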

--Holger

[1]
https://github.com/meineerde-cookbooks/haproxy/blob/master/files/default/haproxy_join
[2] https://github.com/joewilliams/haproxy_join
[3] https://github.com/finnlabs/haproxy/blob/master/haproxy-config.py



Randomly added byte in GET request line with HAProxy 1.5 + OpenSSL

2013-06-14 Thread Holger Just

Hello all,

we see some strange errors in our logs after having introduced HAProxy 
1.5 snapshot 20130611 before our nginx.


It seems like HAProxy sometimes (seldom) inserts a rather random byte as 
the second byte of a GET request line on SSL requests. Some (anonymized) 
log lines follow:


1.1.1.1:30893 [13/Jun/2013:08:41:50.443] front~ master/gemini 
369/0/0/500/869 500 817 - -  3/2/0/0/0 0/0 GNET /login HTTP/1.1
2.2.2.2:50771 [13/Jun/2013:16:03:17.488] front~ special/gemini 
184/0/0/-1/184 502 4410 - - PH-- 0/0/0/0/0 0/0 G3ET /foo HTTP/1.1
3.3.3.3:37310 [13/Jun/2013:16:13:52.495] front~ master/gemini 
911/0/0/-1/911 502 4410 - - PH-- 0/0/0/0/0 0/0 GqET / HTTP/1.1


and more of that. Inserted characters that I have seen include

A J H I U Q N 3 % ~ + ! $ . ' o z q

They are always inserted before the E in GET. We have only seen this 
behavior on GET requests. All other HTTP verbs are completely unaffected.


I can reproduce this error every time with the following conditions:
* HAProxy is compiled with a self-compiled openssl 1.0.1d
* The client is an IE on Windows 7

Other browsers don't show this issue. Also, when I compile HAProxy 
against the default OpenSSL 0.9.8o in Debian Squeeze, it works fine too.


I can reproduce the issue with even the most simple (ssl-) configs, on 
the current snapshot, dev18 and dev17.


I'm a bit worried that this might be the symptom of a larger issue. But 
it might just be that I'm not competent enough to compile my own 
OpenSSL. I would appreciate, if someone could give me some input here.


# uname -a
Linux gemini 2.6.32-5-amd64 #1 SMP Fri May 10 08:43:19 UTC 2013 x86_64 
GNU/Linux


# cat /etc/debian_version
6.0.7

I compiled openssl 1.0.1d with

./config no-idea no-mdc2 no-rc5 zlib enable-tlsext no-ssl2 
--openssldir=/opt/haproxy/openssl

make
make test
make install

Haproxy is compiled as follows (using 
https://github.com/meineerde-cookbooks/haproxy/blob/master/recipes/source.rb): 



# haproxy -vv
HA-Proxy version 1.5-dev18 2013/04/03
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3.4
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1d 5 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1d 5 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.02 2010-03-19
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

The full make line is this:

make TARGET=linux2628 USE_PCRE=1 CPU=generic ARCH=x86_64 
PREFIX=/opt/haproxy/haproxy USE_OPENSSL=1 USE_ZLIB=1 
PCREDIR=/opt/haproxy/openssl/lib -L/usr DEFINE= 
SILENT_DEFINE=-I/opt/haproxy/openssl/include ADDLIB=-lz -ldl ADDINC=


Any hints or help would be greatly appreciated.

Regards,
Holger



Re: Randomly added byte in GET request line with HAProxy 1.5 + OpenSSL

2013-06-14 Thread Holger Just

Hi Lukas,

Lukas Tribus wrote:

sounds like a tricky issue ...


indeed :)


- has the Windows 7 box all the latest patches from MS?


Yes.


- any reason not to use openssl1.0.1e?


I couldn't get it to compile, or in fact, I could compile it, but it 
would break at the `make test` step and I hadn't yet found time to get 
to the bottom of this.



- any security software (suites, software firewalls, anti-virus)
   which may intercept the SSL/TLS session (basically: do you see your
   real certificate in the browser or do you see a certificate of a
   security product)?


There is a simple iptables on the box. By policy, we don't deploy any 
magic security snake oil, so no, nothing of that kind between the client 
and HAProxy. The browser is talking directly to HAProxy.



- could you reproduce this with a self-signed certificate you *don't* use
   in production (so that the private key can be disclosed for
   troubleshooting), tcpdump the ssl session and provide the capture,
   including the private server certificate?


I'll have to reconstruct this on a local VM to anonymize the data a bit. 
I'll get back to you as soon as possible.


Thanks for your support.

--Holger



Re: AW: haproxy in cluster with pacemaker and corosync

2013-03-25 Thread Holger Just

Hi there,

Wolfgang Routschka wrote:

One question about the script. What means config in line 20
HAPROXY_CONFIG=/usr/local/sbin/haproxy-config.py

Configurationfile is setting on line 17


the haproxy-init script in that repo is basically the init script from
the HAProxy Debian package from, iirc, Debian Lenny. I just adapted it to
create a monolithic config file from a couple of smaller files during
reload with haproxy-config.py.

The whole point of the repo is in fact the haproxy-config.py script. If
you just want an init script, you could directly use the unmodified init
script from Debian (which you can simply extract from the package). If
you are on a RedHat-like system,. you can also use the init script
shipped with the HAProxy tar.gz in examples/haproxy.init

--Holger



Re: Backend Configuration Templating

2013-02-06 Thread Holger Just

Hi

Michael Glenney wrote:

We do something similar with chef where we've turned each backend
config associated with an application into json and can dynamically
build configs based on an application list. Completely avoiding using
a template.


In my HAProxy Chef cookbook[1], I have defined resources to generate 
sections (global, default, listen, frontend, backend) into individual 
files from custom templates. This is because I found configurations to 
be too complex to be able to transform them into a tree structure 
without losing flexibility (and understandability).


These section files are then concatenated to a single config file during 
reload using a simple shell script[2].


Using this technique, the user of the cookbook is able to write 
customized templates which can e.g. loop over search results and 
dynamically create complete configs similar to what Robin Lee Powell showed.


Alternatively, in Chef 11 which was released about two days ago, you can 
directly create a single config file from partial templates 
without having to first create single files.


If you don't use Chef, there are a couple of other scripts which concat 
your partial config into a full config file. One by me [3] written in 
python, and another in Ruby by Joe Williams[4]. These are intended to be 
used as part of an init script. An example can be found in the repo of [3].


--Holger

[1] https://github.com/meineerde-cookbooks/haproxy
[2] 
https://github.com/meineerde-cookbooks/haproxy/blob/master/files/default/haproxy_join 


[3] https://github.com/finnlabs/haproxy
[4] https://github.com/joewilliams/haproxy_join



Inaccurate message for errors on bind parsing

2012-10-24 Thread Holger Just

Hi there,

after half a day of debugging (and subsequently kicking myself), I 
finally noticed that whenever HAProxy (1.5-dev12 in this case) 
encounters an unknown option on a bind line, it will error out with this 
message regardless of whether OpenSSL is enabled or not:


[ALERT] 296/194609 (6625) : parsing [/etc/haproxy/haproxy.cfg:40] : 
'bind' only supports the 'transparent', 'accept-proxy', 'defer-accept', 
'name', 'id', 'mss', 'mode', 'uid', 'gid', 'user', 'group' and 
'interface' options.


I thought I went crazy, thinking somehow OpenSSL support would not
properly compile on a certain system when in fact I had only misconfigured it. It
would be awesome if you could fix that message in cfgparse.c to reflect
the actually available options. Unfortunately, I'm not versed enough in 
writing C to fix it myself :(


--Holger



Re: haproxy content switching on url parameters?

2012-07-30 Thread Holger Just
On 2012-07-29 12:56, Reve wrote:
 How about parsing the same thing but if those have been posted as post, not 
 get.

When POSTing data, it will be transmitted in the request body. As the
body can be of arbitrary size, and caching and potentially parsing it
would be a really complex, slow and massively resource-intensive
process, HAProxy doesn't do that at all.

That means, you can only use the HTTP headers for making a routing
decision. Once all headers are read, the request will be dispatched and
none of the following content will be parsed anymore but will be directly
forwarded (with very restricted exceptions of allowing the check of
binary data at fixed predefined byte offsets in the body).

That said, you will not be able to use ACLs for matching structured data
in the body right now. There are some areas that can optionally read
part of the body in 1.5-dev at least (e.g. some balance algorithms) but
it is not generically available right now.
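
One example of such a restricted body inspection is the url_param
balance algorithm with its check_post flag (a sketch; the parameter
name is made up):

backend app
  # check_post makes HAProxy look for the parameter in the first
  # bytes of a POST body instead of the query string
  balance url_param userid check_post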

That might change until the final 1.5 release but is at the discretion
of Willy who is currently rebuilding much of the ACL matching engine.

--Holger



Re: haproxy content switching on url parameters?

2012-07-29 Thread Holger Just
Reve,

On 2012-07-28 19:46, Reve wrote:
 let's say I have this URL
 /blah?x1=5&x2=-5&y1=-1&y2=50

 I want to go to a different set of backends if 
 x1<0, y1<0 - backends set 1
 x1<0, y1>0 - backends set 2
 x1>0, y1<0 - backends set 3
 x1>0, y1>0 - backends set 4

You can't actually parse the URL and match the numbers as integers.
However, you can use match the URL as a string and apply regular
expressions for your conditions.

acl x1_lt_0 url_reg [?&]x1=-\d+
acl x1_gt_0 url_reg [?&]x1=[1-9][0-9]*
acl y1_lt_0 url_reg [?&]y1=-\d+
acl y1_gt_0 url_reg [?&]y1=[1-9][0-9]*

use_backend backend_set_1 if x1_lt_0 y1_lt_0
use_backend backend_set_2 if x1_lt_0 y1_gt_0
use_backend backend_set_3 if x1_gt_0 y1_lt_0
use_backend backend_set_4 if x1_gt_0 y1_gt_0

The ACLs match the simplest possible case. There are some cases where
they might match too much or too little, e.g. when your parameters can
start with a number and end with a string or when you have multiple
question marks. If required, you can adapt the regexes and make them
arbitrarily complex.

However, you should probably not use these rules for securing the
access but only to make a routing decision. Your backend services should
then make sure that the provided URL parameters are actually sound.

For more information about ACL matching, have a look at the documentation at

* http://haproxy.1wt.eu/download/1.4/doc/configuration.txt (authoritative)
* http://cbonte.github.com/haproxy-dconv/configuration-1.4.html#7 (readable)

--Holger

(meh, missed the mailing list, resending there)



Re: Duplicate X-Forwarded-For

2012-02-01 Thread Holger Just
Hey,

On 2012-02-01 17:41, habeeb rahman wrote:
 When there is X-Forwarded-For added by the client (I used Chrome REST
 client) I can see haproxy is sending two X-Forwarded-For to the backend
 instead of appending the values.
 One is client sent and the other one is the one haproxy created newly. To
 make sure I took capture and I see the duplicate one.
 Is this is bug or am I missing something?

You are missing something :) To cite from RFC 2616 (HTTP/1.1):

  Multiple message-header fields with the same field-name MAY be
  present in a message if and only if the entire field-value for
  that header field is defined as a comma-separated list [i.e.,
  #(values)]. It MUST be possible to combine the multiple header
  fields into one field-name: field-value pair, without changing
  the semantics of the message, by appending each subsequent
  field-value to the first, each separated by a comma. The order in
  which header fields with the same field-name are received is
  therefore significant to the interpretation of the combined field
  value, and thus a proxy MUST NOT change the order of these field
  values when a message is forwarded.


As both forms (comma separated and exploded into multiple headers) are
thus equivalent, HAProxy chooses the simplest implementation and just
appends a new header at the bottom of the headers list. Implementations
are expected to handle this the same as if it were a single header with
comma separated values.

Generally, it is a good idea to only trust those headers that you know
are trustworthy (e.g. set by HAProxy itself). Thus, a common
configuration is to delete all existing X-Forwarded-For headers on
arrival and just setting the single new header using something like

reqidel ^X-Forwarded-For:.*
option forwardfor

If you need the client-supplied list, you would have to merge the list
at your final HTTP server nevertheless.

--Holger



Re: Duplicate X-Forwarded-For

2012-02-01 Thread Holger Just
On 2012-02-01 20:00, habeeb rahman wrote:
 I know that apache comma separates the values for X-Forwarded-For and I
 thought haproxy behaves the same.

Both types are semantically the same. So for an application, it
shouldn't matter if you get these headers

X-Forwarded-For: 10.10.10.10
X-Forwarded-For: 192.168.1.1

or this one

X-Forwarded-For: 10.10.10.10, 192.168.1.1

Both formats have to be treated exactly the same according to the RFC.

What you now observe are two different implementations which both
conform to the RFC. Apache normalizes the X-Forwarded-For header to a
comma-separated list, HAProxy doesn't. Both variants are perfectly valid
and semantically (although not syntactically) identical.

Again, for your application to conform to the RFC, you have to accept
both types and have to parse them properly. Most application servers
already do that for you. If your doesn't, you have to parse it yourself.

 Our scenario is client app adds say X-Forwarded-For:10.10.10.10 and then
 haproxy also adds another header X-Forwarded-For:192.168.1.1
 
 so at the backend we can see 
 
 X-Forwarded-For:10.10.10.10
 X-Forwarded-For:192.168.1.1
 
 Does this match the clause mentioned?
 Just trying to make sure I understood it right :)

The important part of the quote was

  It MUST be possible to combine the multiple header fields into
  one field-name: field-value pair, without changing the
  semantics of the message, by appending each subsequent
  field-value to the first, each separated by a comma.

And that is exactly what you see here. field-value in this case refers
to a comma-separated list. It must be possible to combine headers with
multiple values into such a list, but it is not expected that it is
always done.

But if in doubt, please refer to the RFC yourself :)
http://tools.ietf.org/html/rfc2616 Section 4.2

--Holger



Re: Parsing Logs

2012-01-09 Thread Holger Just
Hi Joe,

On 2012-01-09 14:25, Joseph Hardeman wrote:
 I was wondering if anyone has a way to parse the logs and present them
 in a friendly format?  Such as with AWStats or another log parser.

There is Logstash [1] which includes patterns for parsing the HAProxy
HTTP log format. It can either store the logs itself in Elasticsearch
and provide a reporting UI, or it can ship your logs anywhere you like. A
popular (and really awesome) choice is Graylog [2] which provides all
kinds of reporting and analytics on your logs.

If you want to use your existing log analyzing stack and don't need the
added information from the HTTP logs, you can use

option httplog clf

which generates Logs in the Common Log Format (clf) which is also used
by default by Apache. It is more compatible with inflexible log parsers
but gives you less information about requests than the default HTTP log
does.

[1] http://logstash.net/
[2] http://graylog2.org/

--Holger




Re: Using same health check result for multiple backends

2011-12-21 Thread Holger Just
Damien,

you can use the track keyword on the server line to define which server
to, well, track. Find an example below:

backend foo
  server foo1 1.2.3.4 check

backend bar
  server bar1 1.2.3.4 track foo/foo1

--Holger

On 2011-12-21 12:28, Damien Churchill wrote:
 Hi there,
 
 Apologies if this has been asked before but I'm unable to find
 anything on the matter. I'm load balancing a cluster of mail access
 nodes (HTTP/IMAP/POP3/SMTP) and setup a health check php page that the
 tcp protocols could use as well, however this means hitting the health
 page for each of the setup backends. I was wondering if it's possible
 to use the same health check result for multiple backends?
 
 Thanks in advance!
 
 Damien
 




Re: Autoscaling in haproxy with persistence sessions

2011-11-07 Thread Holger Just
On 2011-11-07 21:32, Erik Torlen wrote:
 If you get a burst against 3 active backend servers they will take
 care of all the request and connections. The clients that are active 
 will then get a persistence sessions against 1 of these 3 servers. It
 will take ~5min to scale up a new server so during that period more
 clients could come in and the 3 backend would then be even more
 overloaded.

You should take care to not overload your backend servers in the
first place. The connection limits can be finely tuned for each backend
server. Requests exceeding the limits are queued, which will prevent your
servers from getting overwhelmed and dying, usually taking others with them.
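
A minimal sketch of such limits (numbers and addresses are made up):

backend app
  # each server handles at most 50 concurrent connections; excess
  # requests wait in the backend queue for up to 30 seconds
  timeout queue 30s
  server app1 10.0.0.1:80 check maxconn 50
  server app2 10.0.0.2:80 check maxconn 50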

Generally, I think you should make sure that your service is not getting
overwhelmed by starting new instances earlier so you can actually handle
the traffic. But in the end, I think it depends on how important session
locality is for your service, i.e. which of those you can accept
earlier: broken session locality or slightly delayed responsed due to
queing.

--Holger



Re: Cookie persistence

2011-10-17 Thread Holger Just
On 2011-10-17 14:48, Ist Conne wrote:
 HAProxy is supported cookie-based persistence.
 But, cookie-based Load balancing has a patented F5 Networks.
 http://www.google.com/patents/about?id=3MYLEBAJ

Without being a lawyer, I'd play the prior art card as HAProxy supported
cookie based persistence since 1.0 which dates prior to the patent filing.

That said, I think the patent might actually be a nuisance which might
produce some serious costs and headaches if F5 is determined to enforce
it but from my point of view it will not stand.

Patents... MEH!

--Holger



Re: Haproxy -v 1.4.18 and amazon rds

2011-10-14 Thread Holger Just
Rhys,

HAProxy resolves IPs of backend servers only once during startup. As new
EC2 instances get a new IP on every startup, HAProxy doesn't find your
new instance. Because of that, it is generally discouraged to use
hostnames in backend specifications.

You have basically two ways to solve that:

* You can restart HAProxy to force a re-resolution of DNS names
* You can use elastic IPs for your servers which gives them static IPs.

And if I were you, I'd use elastic IPs, as dynamic IPs on servers give
me bad headaches personally...
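
With an elastic IP attached to the instance, the server line would then
reference the static address directly, e.g. (the address below is just a
documentation example):

server slave1 203.0.113.10:3306 weight 10 check inter 5s observe layer4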

--Holger

On 2011-10-14 10:57, Rhys Powell wrote:
 Hello all,
 
 Have a problem at the moment that I just cant seem to fix. We currently
 run a master RDS instance but between the hours of 8am and 6pm we spawn
 a read only replica up so that there is no affect on writes while some
 of the longer queries are made.
 
 If we restart haproxy it instantly picks up this slave but it never finds
 it without a restart. Anyone else had this problem or know what a fix is?
 
 config file is below
 
 TIA
 
 Rhys
 
 global
 log 127.0.0.1   local0
 maxconn 4096
 user haproxy
 group haproxy
 daemon
 
 defaults
 log global
 mode tcp
 option  tcplog
 option  dontlognull
 retries 3
 option redispatch
 contimeout  5000
 clitimeout  5
 srvtimeout  5
 
 listen mysql-pdw
 bind 0.0.0.0:3306
 option mysql-check user theusername
 hash-type consistent
 balance roundrobin
 server slave2 nameofslave2andid.rds.amazonaws.com:3306 weight 100 check inter 5s observe layer4
 server slave1 nameofslave1andid.rds.amazonaws.com:3306 weight 10 check inter 5s observe layer4
 server master nameofmasterandid.rds.amazonaws.com:3306 weight 1 check inter 5s observe layer4
 
 
 




Re: Automate backend registration

2011-08-03 Thread Holger Just
Jens,

Many people have a script that builds a working configuration file from
various bits and pieces. As the actual needed configuration typically
isn't something which follows a common path but depends on the
environment and the actual applications and a thousand other bits, there
isn't a standard here.

But it really isn't hard to throw together a small shell/perl/python
whatever script which concatenates the final config file from various
pieces or uses some templating language of your chosen language.

A script we use is https://github.com/finnlabs/haproxy. It consists of
a python script which assembles the config file from a certain directory
structure. This script is then called before a start/reload of the
haproxy in the init script.

So basically, you need to create your script for generating your Haproxy
configuration, hook it into your init script and then, as a post-install
in your RPMs put the configuration in place for your
configuration-file-creating-script and reload Haproxy.

To enable/disable previously registered backend components, you might be
able to use the socket, but that usage is rather limited and mainly
intended for maintenance, not for actual configuration changes.
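
For example, with a stats socket configured at admin level, a server
can be toggled like this (paths and names are made up):

# global section: stats socket /var/run/haproxy.sock level admin
echo "disable server auth/auth-1" | socat stdio /var/run/haproxy.sock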

Hope that helps and sorry if that was a bit recursive :)
Holger

On 2011-08-03 22:52, Jens Bräuer wrote:
 Hi Baptiste,
 
 sorry for my wording. But you are right, with registration I mean
 - add ACL
 - add use_backend
 - add backend section
 so to sum it up make haproxy aware of a new application.
 
 There might be cases there I want to only add a server to existing backend, 
 but that would be the second/third step.
 The use-case is that I have HA-Proxy running and do a yum/apt-get install 
 and the RPM should come with everything to integrate with HA-Proxy. I am sure 
 that there must be some tool out there.. ;-)
 
 Cheers,
 Jens
 
 
 On 03.08.2011, at 20:24, Baptiste wrote:
 Hi Jens,

 What do you mean by registration?
 Is that make haproxy aware of the freshly deployed application  ?

 cheers

 On Wed, Aug 3, 2011 at 5:46 PM, Jens Bräuer jens.brae...@numberfour.eu 
 wrote:
 Hi HA-Proxy guys,

 I wonder whats the current state of the art to automate the registration of 
 backend. My setup runs in on EC2 and I run HA-Proxy in front of local 
 applications to easy administration. So a typical config file would be like 
 this.

 frontend http
bind *:8080
acl is-auth path_beg /auth
acl is-core path_beg /core
use_backend auth if is-auth
use_backend core if is-core

 backend auth
server auth-1 localhost:7778 check

 backend core
server core-1 localhost:1 check

 All applications are installed via RPMs and I would like couple the 
 installation with the backend registration. I like to do this as want to 
 configure everything in one place (the RPM) and the number of installed 
 applications may vary from host to host.

 I'd really appreciate hint where I can find tools or whats the current 
 state to handle this kind of task.

 Cheers,
 Jens



 
 




Re: Parsing httplog with java

2011-07-04 Thread Holger Just
Hi Damien,

On 2011-07-04 14:34, Damien Hardy wrote:
 Has anyone ever done the regex to parse the haproxy apachelog?
 (we want to inject logs in hbase via flume :)

although it's not directly targeted at Java but written in Python, I
have already posted my approach to parsing the HAProxy HTTP logs to
this list some time ago. See
http://permalink.gmane.org/gmane.comp.web.haproxy/5320 for the
boilerplate script including the regex.

Hope that helps,
Holger



Re: HAProxy for PostgreSQL Failover

2011-06-22 Thread Holger Just
Alan,

On 2011-06-15 19:54, Alan Gutierrez wrote:
 I'd like to use HAProxy to implement a simple proxy that can perform
 failover for a pair of PostgreSQL configured as master/slave with
 PostgreSQL 9.0 streaming replication to replicate the master to the
 slave. Only the master is active for client connections, unless the
 master fails, then the clients should connect to the slave while an
 administrator recovers the master.

You might also want to have a look at pgpool-II [1] which is a proxy
specifically designed for failover, replication and loadbalancing of
Postgres servers. Recent versions can take advantage of the built-in
asynchronous replication feature of Postgres 9. Using this, you can

* configure failover and recovery
* potentially utilize the second machine for read-only queries.

--Holger

[1] http://pgpool.projects.postgresql.org



Re: Help on SSL termination and balance source

2011-06-09 Thread Holger Just
Habeeb,

given your Apache does actually insert/append an X-Forwarded-For header
you can use this statement instead of balance source in HAProxy:

balance hdr(X-Forwarded-For)

This has a few caveats you should be aware of. Users can set the
X-Forwarded-For header themselves (which is done by some upstream proxies).
Most forwarders (HAProxy included) just append their IP to the list by
default. I don't know how Apache can be configured, but you should try
to delete any upstream X-Forwarded-For headers and just include the IP
of the last visible source to avoid users messing with the balancing.

Hope that helps,
Holger

On 09.06.2011 15:54, habeeb rahman wrote:
 James,
 
 Thanks for your points. Rewrite rule was set up by some other guys and
 is being used for some time now and works well with round robin.
 Anyhow I will look at mod_proxy in detail. Not sure how SSL termination
 can be done with it and moreover how haproxy gonna balance based on
 client IP. Any insight?
 
 Anyone else has any thoughts or insights to share?
 
 -Habeeb
 
 On Thu, Jun 9, 2011 at 7:11 PM, James Bardin jbar...@bu.edu
 mailto:jbar...@bu.edu wrote:
 
 On Thu, Jun 9, 2011 at 7:33 AM, habeeb rahman pk.h...@gmail.com
 mailto:pk.h...@gmail.com wrote:
 
  apache rewrite rule:
   RewriteRule ^/(.*)$ http://127.0.0.1:2443%{REQUEST_URI} [P,QSA,L]
 
 
 Why are you using a rewrite instead of mod_proxy?
 ProxyPass does some nice things by default, like adding the
 X-Forwarded-For header which will provide the address of the client.
 Otherwise, you will need to do this manually with rewrite rules.
 
 -jim
 
 




Re: Different URL on backend

2011-01-24 Thread Holger Just
Sorry for the impersonation. My virtual identity setup got a bit overly
excited and made an awful mess in the whole room. Guess I need some
napkins now...

--Holger



Re: Downgrade backend request/response to HTTP/1.0

2010-05-04 Thread Holger Just
Hi Dave,

On 2010-05-04 18:55, Dave Pascoe wrote:
 Is there a way in haproxy 1.4 to perform the equivalent function that
 these Apache directives perform?
 
  SetEnv downgrade-1.0 1
  SetEnv force-response-1.0 1
 
 i.e., force haproxy to downgrade to HTTP/1.0 even though the client is
 HTTP/1.1

I'm not really sure what you are trying to achieve with this (as you
should really reconsider using software which does not understand HTTP
1.1 nowadays), but you could force the HTTP version using the following
statements:

# replace the HTTP version in the request
reqrep ^(.*)\ HTTP/[^\ ]+$ \1\ HTTP/1.0

# and also force HTTP 1.0 in the response
rsprep ^(.*)\ HTTP/[^\ ]+$ \1\ HTTP/1.0

Note that this does not change any headers, so the request and response
are technically still HTTP/1.1, only disguised as HTTP/1.0.

--Holger



Re: Hardware recommendations

2010-04-28 Thread Holger Just
On 2010-04-28 19:10, Alex Forrow wrote:
 We're looking to upgrade our HAProxy hardware soon. Does anyone have any
 recommendations on the things we should be looking for? e.g. Are there
 any NICs we should use/avoid?

Hi Alex,

I'm just writing down here what comes to my mind. Sorry if it looks a
bit unorganized...

Haproxy itself is not very demanding. A two core system will suffice.
Check to have enough RAM to hold all your sessions, but since it's
rather cheap to get 4 or 8 Gigs you should be safe here :)

Always think about the resource demands of the TCP stack. On large
loadbalancer instances (esp. with many short connections), the TCP stack
will consume much more resources than your Haproxy.

Some NICs allow offloading some responsibilities like the calculation
of packet checksums to silicon. Interrupt mitigation is something you
most probably want to have. Normally, each packet will trigger an
interrupt which will eat away all your resources if there are many of
them. Some NICs allow to cap the number of interrupts per second which
might increase latency a bit but saves your load balancer from dying :)

Make really sure your intended NIC is very well supported by your
intended OS. Many show suprising behaviour under stress. So the best
advice would possibly be to have a look in the vendors hardware support
lists and ask in the respective channels.

You most probably want to stay away from most on-board NICs from vendors
like Broadcom or SiS. Dedicated PCIe NICs from Intel are normally safe
(you find them also on some server boards from e.g. Supermicro). But
make sure to check the individual capabilities.

As a loadbalancer is always IO bound, check your data paths. Most
interesting is the speed from and to the NIC (in a way that the
network-line is always the bottleneck) and between memory (ECC of
course) and CPU. Harddisks are obviously uninteresting :)

Hope this helps,
--Holger



Re: error page problem

2010-04-13 Thread Holger Just
Hi Mikołaj,

On 2010-04-13 12:47, Mikołaj Radzewicz wrote:
 I was trying to configure custom error pages on haproxy but after
 wasting a lot of time I'm not successful. I wanted to serve it all
 the time as my backends give it to the clients.

if I understand you correctly you want to check if one of your backends
returned a HTTP 500 and replace its response to the client with the
errorfile in haproxy.

This is actually not possible. The errorfile and errorloc parameters
only apply to errors generated by Haproxy itself. So the file specified
in errorfile 500 ... is only served if Haproxy itself had an internal
error. You have to fix your application error pages instead.

Also as Thomas noted, the configured errorfiles are in HTTP format. Thus
at least you would have to add the appropriate HTTP headers. Be sure to
use the correct linebreaks (\r\n).

--Holger



Re: Changing HA Proxy return codes

2010-04-07 Thread Holger Just
Hi Matt,

On 2010-04-07 14:34, Matt wrote:
 If I wanted to change the error return code submitted by haproxy (not
 the backend server) is this possible? i.e. change haproxy to return a
 502 when it's going to return a 504?

You could (ab)use the errorfile parameter and have haproxy send
arbitrary data. Thus you could do something like this:

errorfile 500 /etc/haproxy/errorfiles/503.http
errorfile 502 /etc/haproxy/errorfiles/503.http
errorfile 503 /etc/haproxy/errorfiles/503.http
errorfile 504 /etc/haproxy/errorfiles/503.http

and then have the file at /etc/haproxy/errorfiles/503.http contain
something like this:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 329

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Something is wrong</title>
</head>
<body><h1>Something went wrong</h1></body>
</html>

Note that you should correctly re-calculate the Content-Length header
(or leave it out) if you do any changes here.

--Holger



Re: queued health checks?

2010-03-20 Thread Holger Just
Hi Greg,

On 2010-03-20 6:52 AM, Greg Gard wrote:
 i remember somewhere in the archives mention of a plan to make health
 checks get queued like any other request. did that happen in 1.4.x
 branch with all the work to health checks. i searched the archives,
 but didn't turn up what i remembered. my use case is rails/mongrel
 with maxconn = 1 so i don't want health checks getting sent to a
 mongrel that might be serving a request or more critically having a
 request puke because haproxy sent a health check to the same server it
 just sent a client request to.

The haproxy - mongrel topic was discussed several times in the past.

I think your health checks should not be a problem if you know and
configure your haproxy well. Yes, mongrels are only able to process one
request at a time but they are also able to queue requests themselves
without puking on the second. This results in having the health checks
queued with the regular requests on the mongrels.

IMHO that should be no problem if you configure your check timeouts
accordingly such that they can handle a rather long regular request
queued in front of the health check (depending on your actual setup).

I think, you should always check all your mongrels as they have the
tendency to just die if something went wrong. If you just check one, it
might not detect some failed mongrels.

For health checks of rails apps we use a simple dedicated controller
inside the rails app (that depends on all the initializers) which just
performs a SELECT true; from the database. This works really well for
us, although we do not use mongrels anymore but glassfish+jruby. As this
check is rather fast, it should not lead to major issues even on mongrels.
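
On the HAProxy side, such a controller could be polled like this (path
and addresses are made up):

backend rails
  option httpchk GET /health
  server app1 10.0.0.1:3000 check inter 5s
  server app2 10.0.0.2:3000 check inter 5s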

--Holger



Re: [ANNOUNCE] haproxy-1.4.0

2010-03-02 Thread Holger Just
Hi Willy,

On 2010-03-02 23:43, Willy Tarreau wrote:
 I could get the same errors on my ultra5 under solaris 8
 which correctly builds 1.3. I finally tracked that down to
the #define _XOPEN_SOURCE 500 in auth.c. If I remove it,
 everything builds as before.

just for the archives: 1.3 also compiles fine at my Opensolaris box. So
if I can help in solving the 1.4 issues I will happily try to do so.

Anyways: Thank you and all the other guys for the support and this great
piece of software :)

--Holger



Re: [ANNOUNCE] haproxy-1.4.0

2010-02-28 Thread Holger Just
Hi Willy,

On 2010-02-28 07:29, Willy Tarreau wrote:
 Could you please try to add the two following lines at the top of the
 3 faulty files (types/session.h, types/proxy.h, types/protocols.h) :
 
 #include sys/types.h
 #include sys/socket.h
 
 I think it should fix the build.

Thanks for your help. Unfortunately, it did not work. The errors are
exactly the same.

I noticed though, that the definition of sockaddr_storage in
/usr/include/sys/socket_impl.h looks like this:


#if !defined(_XPG4_2) || defined(_XPG6) || defined(__EXTENSIONS__)
[...]
struct sockaddr_storage {
sa_family_t ss_family;  /* Address family */
/* Following fields are implementation specific */
char_ss_pad1[_SS_PAD1SIZE];
sockaddr_maxalign_t _ss_align;
char_ss_pad2[_SS_PAD2SIZE];
};
#endif


So if I add SILENT_DEFINE=-D__EXTENSIONS__=1 to the make call gcc does
not complain anymore (even without adding the additional includes). I'm
not currently able to decide if the resulting executable is completely
valid or if this is a valid approach even.

-Holger



Re: [ANNOUNCE] haproxy-1.4.0

2010-02-27 Thread Holger Just
Hi all,

On 2010-02-26 16:02, Willy Tarreau wrote:
 I'm obviously interested in any problem report :-)

I'm trying to compile Haproxy 1.4 on Opensolaris Build 133 (i386 on a
Core i7). This however fails.

make TARGET=solaris CPU=i686 USE_STATIC_PCRE=1
SMALL_OPTS=-I/usr/include/pcre
[...]
gcc -Iinclude -Iebtree -Wall  -O2 -march=i686 -g -fomit-frame-pointer
-DFD_SETSIZE=65536 -D_REENTRANT -I/usr/include/pcre -DTPROXY
-DENABLE_POLL -DUSE_PCRE -I/usr/include
-DCONFIG_HAPROXY_VERSION=\"1.4.0\" -DCONFIG_HAPROXY_DATE=\"2010/02/26\"
-c -o src/auth.o src/auth.c
In file included from include/types/fd.h:32,
 from include/proto/fd.h:31,
 from include/common/standard.h:32,
 from include/common/compat.h:29,
 from include/types/acl.h:25,
 from include/proto/acl.h:26,
 from src/auth.c:22:
include/types/protocols.h:87: error: field `addr' has incomplete type
In file included from include/types/queue.h:29,
 from include/types/server.h:37,
 from include/types/lb_map.h:26,
 from include/types/backend.h:29,
 from include/types/proxy.h:40,
 from include/types/acl.h:30,
 from include/proto/acl.h:26,
 from src/auth.c:22:
include/types/session.h:170: error: field `cli_addr' has incomplete type
include/types/session.h:171: error: field `frt_addr' has incomplete type
In file included from include/types/acl.h:30,
 from include/proto/acl.h:26,
 from src/auth.c:22:
include/types/proxy.h:154: error: field `src' has incomplete type
make: *** [src/auth.o] Error 1

I'm at a loss here as my knowledge of C is rather sparse, as is my
(Open)solaris experience.

--Holger



Re: URL rewrite question

2010-02-06 Thread Holger Just
On 2010-02-06 10:55, Willy Tarreau wrote:
  reqrep ([^\ ]*)\ /action.register\?([^&]*&)*param2=bar(.*)  \1\ 
 /newaction\?\2param2=bar\3

This does it. Looks like your Regex Kung Fu is stronger than mine. But
well, it was late :)

--Holger



Re: URL rewrite question

2010-02-04 Thread Holger Just
On 2010-02-04 21:15, Sriram Chavali wrote:
 I am trying to rewrite URLs using haproxy's reqirep directive. The url that I 
 am trying to rewrite is of the pattern
 /action/register?param1=foo&param2=bar&param3=baz
 
 The URL that I want to be rewritten is 
 /newaction?param1=foo&param2=bar&param3=baz on the condition that whenever 
 param2=bar
 
 The ordering of the query string parameters can be random, i.e param2 could 
 be the 1st parameter or the last one. 

In Haproxy 1.3.x it is currently not possible to issue a reqirep on a
conditional basis. This is however introduced in Haproxy 1.4-rc1. There
you could solve your problem using something like this:

acl baz url_sub &param2=bar ?param2=bar
reqirep ([^\ ]*)\ /action/register\?(.*)  \1\ /newaction\?\2  if baz

On 1.3.x you could however use something like the following which
performs a content switch based on the acl and performs the reqirep
later in the backend. You might want to look into the track keyword for
the servers.

frontend foo
  bind :80
  mode http

  acl baz url_sub &param2=bar ?param2=bar
  use_backend baz if baz

backend baz
  reqirep ([^\ ]*)\ /action/register\?(.*)  \1\ /newaction\?\2
  server foo 192.168.1.1:80

Yes, this is ugly but as far as I know it is the only possibility for
now. (And yes, I look forward to removing many such backends in my
installations too)

--Holger



Re: Manipulate packet payload

2010-02-02 Thread Holger Just
Hi

On 2010-02-02 16:19, Anthony D wrote:
 I understand that HAproxy can do L7 header manipulation, however I read
 in the manual that it doesn't touch the data contents. Are there any
 plans for adding this option?

I can not speak for Willy, but as content manipulation (and also some
kinds of header manipulation) is very expensive in terms of CPU, memory
and latency I think this is not going to happen anytime soon or at all.

Most people would agree that content rewriting is out of scope for a
load balancer at all.

 If there isn't, does anyone have any open-source suggestions? I'm aiming
 to modify the response part of the data if that helps any.

You might want to have a look at Privoxy (http://www.privoxy.org/) or
Proxomitron for Windows (http://www.proxomitron.info/ or
http://www.buerschgens.de/Prox/index.html (in German))

--Holger



Re: Mode tcp and ACL's - missing something obvious?

2010-01-27 Thread Holger Just
Hi Harvey

On 2010-01-28 00:42, Harvey Yau wrote:
 I've been trying to use ACLs to block or choose a backend based on
 source IP address.  It works perfectly in mode HTTP, but fails miserably
 in mode TCP.  Is there something obvious that I'm missing or is this a bug?
 
 mode tcp
 acl myips src 149.28.0.0/16
 block if myips

The block keyword works on layer 7 only. You could however try something
like this:

acl myips src 149.28.0.0/16
tcp-request content reject if myips

For more examples see the documentation for tcp-request at

  http://haproxy.1wt.eu/download/1.3/doc/configuration.txt

Regards,
Holger



Re: Does anyone have an init.d script for Debian?

2010-01-10 Thread Holger Just
Hi Craig,

(sorry for double posting, missed the correct button...)

On 10.01.10 11:01, Craig Carl wrote:
 Does anyone know where I can find a /etc/init.d/haproxy script for
 Debian?

The simplest approach would probably be to use the one shipped with the
official Haproxy package for Debian. To get it, just go to

http://packages.debian.org/squeeze/amd64/haproxy/download

and download the package from a location near you.

Now unpack the package using something like

ar -x haproxy_1.3.22-1_amd64.deb

You will notice that it extracts a few files, among them data.tar.gz and
control.tar.gz. The data.tar.gz file contains the directory structure
which would be created on package installation. So if you examine it,
you will find your init script in etc/init.d/haproxy.

The other alternative would be to just use the haproxy package from
Debian without compiling it yourself. They are indeed compiled using
USE_PCRE=1

--Holger



Re: haproxy administration web interface

2009-12-07 Thread Holger Just
Hi,

On 07.12.09 20:49, Israel Garcia wrote:
 Hi,
 A simple question, is there any web interface to administer haproxy via web?

A simple answer: Nope, at least no free one I have heard of. Maybe you
could find something from loadbalancer.org

However, I am currently looking into developing a simple twisted and/or
django based REST-webservice to manage some aspects of Haproxy.

Currently, I am planning the following features:

* Create and edit a complete configuration by using something like
  haproxy-config (http://github.com/finnlabs/haproxy)
* Add, edit and remove complete sections
* Allow member servers of backends and listeners to be added and
  removed

* Use the stats-socket to interface directly with Haproxy
* Set the weight of individual backend servers (for Haproxy 1.4)
* Provide a (readonly) webservice API to the various Haproxy stats

Optionally: Provide a callback interface to perform certain user-defined
actions based on state changes of resources by providing callbacks to
which user code can register itself. This interface could be called from
something like syslog-ng in nearly realtime.

I plan on hacking on it during the evenings / nights of the upcoming
26c3. So if you have any ideas, feel free to provide them here.

--Holger



Re: Session stickiness over HTTP and HTTPS

2009-12-07 Thread Holger Just
On 07.12.09 23:19, Anthony Urso wrote:
 Hi:
 
 I am looking for advice on the best way to load-balance HTTP and HTTPS
 traffic such that once a session is established with either protocol,
 haproxy continues to send new requests from that session to the same
 web server.
 
 Is this a common use case?

This is indeed pretty common (although I tend to avoid this for the sake
of simplicity by using cookie-based sessions et al.)

However, as HTTP is a stateless protocol by definition, which does not
inherently have the concept of a session, you have to decide for
yourself (or your app) what exactly makes a session.

Using this info you can then tell Haproxy how to match a specific
stateless request from a client and send it to the correct server which
then holds its session data.

For some well-documented examples, see the architecture guide [1].
Additionally, it is always a good idea to put the configuration manual
[2] under your pillow at night ;)

 I see that section 3.1 in the configuration guide discusses using
 stunnel for this, but it's not clear whether haproxy will choose the
 sticky server based on stunnel's X-Forwarded-For header or it will
 choose the destination by the stunnel machine's address?

As stated above, this is up to you. In this case I think it only makes
sense to have it use the X-Forwarded-For header of stunnel. You can
configure both.
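
A sketch of both variants (only one would be active at a time; balance
hdr() requires a sufficiently recent HAProxy version):

backend app
  # stick on the client address reported by stunnel
  balance hdr(X-Forwarded-For)
  # or: stick on the connection source, i.e. the stunnel machine
  # balance source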

--Holger

[1] http://haproxy.1wt.eu/download/1.3/doc/architecture.txt
[2] http://haproxy.1wt.eu/download/1.3/doc/configuration.txt



Re: ACLs + header manipulation

2009-07-14 Thread Holger Just
On 14.07.2009 18:12 Uhr, Jeremy wrote:
 Is it possible to use 'reqirep' to i.e. rewrite a Host header, only if a
 certain ACL matches? As far as I can tell it doesn't look like you can
 combine ACL's with the req* header manipulation commands but I just
 wanted to double check.

Jeremy,

Unfortunately, this is currently not possible, as the header manipulation
methods do not accept any ACLs. You could, however, first route your
request to a special backend using use_backend (and appropriate ACLs).
The header manipulation could then be done in the context of this
backend exclusively.
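
A rough sketch of that workaround (hostnames and addresses are made up):

    frontend http-in
        bind :80
        acl host_old hdr(host) -i old.example.com
        use_backend rewrite_host if host_old
        default_backend normal

    backend rewrite_host
        # rewrite the Host header for everything routed here
        reqirep ^Host:\ .*  Host:\ new.example.com
        server www1 192.168.1.1:80

    backend normal
        server www1 192.168.1.1:80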

While this might become very ugly at a large scale, it can be used as an
isolated workaround until Willy (or some other skilled C developer :)
decides to implement ACLs on the header manipulation methods.

I, for one, would appreciate such an extension.

--Holger



Re: haproxy include config

2009-07-09 Thread Holger Just
On 09.07.2009 7:15 Uhr, Willy Tarreau wrote:
 As I said in earlier mail, I have implemented the multiple file loading
 in 1.4-dev :
 
   
 http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=5d01a63b7862235fdd3119cb29d5a0cfd04edb91
 
 If many people are interested, I know it will be quite easy to backport it
 to 1.3, and I can merge it into 1.3.19 once I have a few other things to
 put with it.

Willy!

Would it be difficult to extend your patch to load all files
(potentially named *.cfg or something like that) in a directory? You
could check each argument to see whether it is a file or a directory and
include the files it contains if it is the latter.

I think that would be more flexible and allow for a more natural
configuration. You could, for example, create a self-configuring
load balancer by just dropping a file with a server definition into a
backend directory and issuing a reload. Judging from the comments
above, that seems to be desirable to at least some of us :)
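
In the meantime, something like this in an init script could emulate
the directory behaviour on top of the new multiple-file loading
(untested sketch, hypothetical paths):

    ARGS=""
    for f in /etc/haproxy/conf.d/*.cfg; do
        ARGS="$ARGS -f $f"
    done
    haproxy -D $ARGS -p /var/run/haproxy.pid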

Unfortunately, I'm not that fluent in C, so I cannot provide a proper
patch, but perhaps someone else can?

--Holger



Re: Duplicate checks in backup servers?

2009-06-26 Thread Holger Just
Pedro Mata-Mouros Fonseca wrote:
 This is my first post into this mailing list, been following it for a
 few days. So, greetings from Portugal. I have a small doubt: I have a
 few backend sections defined in my haproxy.conf, one of each is composed
 of server1 to 4 - and all of them using the check keyword. In another
 backend I use those servers as backups to the main one, and my question
 is: do I have to explicitely specify check on these backups in order for
 this later backend to be aware of their health? Will this not double the
 healthchecks for them?

Pedro,

You can use the track keyword on backend servers to have them mirror the
health status of other servers. You can use it like this:

backend primary
    server www1 192.168.1.1:80 check port 80 inter 5s
    server www2 192.168.1.2:80 check port 80 inter 5s

backend secondary
    server www1_secondary 192.168.1.1:80 track primary/www1
    server www2_secondary 192.168.1.2:80 track primary/www2

This way, health checks are performed only once every 5 seconds for each
physical server, no matter how many backends reference it.

--Holger



Re: Help needed

2009-06-17 Thread Holger Just
On 17.06.2009 19:59 Uhr, Karthik Pattabhiraman wrote:
 We use HAProxy 1.3.17 for our setup. We faced an issue where the
 requests were redirected to a wrong cluster. We are still not able to
 figure out why this happened and would really appreciate any help.
 
 Please find attached a sample configuration file. In this case when
 requests were coming to m.nbcsports.com they were redirected to
 ad_cluster and we are at out wits end trying to figure what is wrong.

Karthik,

I assume that not every request is routed the wrong way, but only
seemingly arbitrary ones. Further, I assume that requests to nbc_cluster
and ad_cluster happen on the same page (i.e. a browser hits both
clusters in a single page view).

If this is the case, you might try to include the option httpclose
directive in your frontend or your defaults section. As you included it
only in the nbc_cluster backend and not the other ones, I think this
will cause trouble: without httpclose, only the first request of a
keep-alive connection is inspected, so all subsequent requests on the
same connection go to whichever backend handled the first one.
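
For example, moved into the defaults section so it applies to all
proxies (sketch):

    defaults
        mode http
        option httpclose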

Please also read the documentation for option httpclose (and possibly
option forceclose) again...

--Holger



Re: Performance issue with images.

2009-06-16 Thread Holger Just
On 17.06.2009 1:29 Uhr, Yves Accad wrote:
 Please let me know any
 detail I need to provide you to help troubleshooting the issue.

Yves,

Unfortunately, your descriptions are rather vague and my crystal ball
is still being fixed at the mechanic's. Sorry...

To help you in the meantime, would you mind providing your configuration
file so we could try to see where it broke?

--Holger



Re: haproxy include config

2009-06-15 Thread Holger Just
On 15.06.2009 6:36 Uhr, Timh Bergström wrote:
 Hello Holger,
 
 If nothing else, I would be interested in this script.
 
 Cheers,
 Timh

So, after checking with my boss about open-sourcing our stuff, I can
finally conclude: Yes we can! :)

You can find the script at http://github.com/finnlabs/haproxy/tree/master

--Holger



Re: Stripping some response headers

2009-06-15 Thread Holger Just
On 15.06.2009 19:24 Uhr, Karl Pietri wrote:
 Due to some strange things we are doing with our logging we have a bunch
 of info in the response headers that would be nice to strip out before
 sending to the client.  Is this possible in haproxy?
 
 Essentially we are logging things like the user_id if logged in by using
 a custom header X-SS-Userid and capturing that.  but obviously it isn't
 necessary for the client to get it so would be nice to strip it out.
 
 -Karl

Hi Karl!

This sounds like a job for the rspdel / rspidel configuration
directives. For more information, have a look at the configuration
documentation at

http://haproxy.1wt.eu/download/1.3/doc/configuration.txt
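
In your case, a rule like the following in the frontend or backend
should do it (untested; rspidel matches case-insensitively, rspdel
case-sensitively):

    rspidel ^X-SS-Userid: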

--Holger



Re: haproxy include config

2009-06-12 Thread Holger Just
On 12.06.2009 23:08 Uhr, Joe Williams wrote:
 I looked through the docs but didn't see anything. Is it possible to
 include config files from the main config file? So you could do
 something similar to a vhosts.d directory in Apache or Nginx.

Hello Joe!

Unfortunately, this is not supported by HAProxy itself right now.

However, I wrote a small script which is run as part of my
/etc/init.d/haproxy (start|reload|restart). This script compiles config
fragments found in a fixed directory structure into a complete
/etc/haproxy/haproxy.cfg file.

The script expects the following directory structure. If you are
interested, I can provide it here.

/etc/haproxy
    defaults
        00-base
        10-errorfiles

    frontends
        my-first-frontend
            00-ports
            10-acls
            20-backend1
            21-backend2
        my-second-frontend
            ...

    listen
        sect1
            00-base
            10-backend1
        sect2
            ...

    backends
        my-first-backend
            00-base
            10-server1
            11-server2
        my-second-backend
            ...
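
In essence, the script does little more than the following (naive
sketch; it assumes each section's first fragment, e.g. 00-base or
00-ports, contains the section header line itself):

    {
        cat /etc/haproxy/defaults/*
        cat /etc/haproxy/frontends/*/*
        cat /etc/haproxy/listen/*/*
        cat /etc/haproxy/backends/*/*
    } > /etc/haproxy/haproxy.cfg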

--Holger