Deny http connection

2011-11-25 Thread Sander Klein

Hi,

I was wondering if it is possible to start rate-limiting or denying a 
connection based on response codes from the backend.


For instance, I would like to start rejecting or rate-limiting an HTTP 
connection when a client triggers more than 20 HTTP 500s within a 
certain time frame.


Is this possible?
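
(For what it's worth, a hedged sketch of the direction 1.5-dev later offered 
for this, assuming a build with stick-tables and tracked counters; names, 
sizes and thresholds are illustrative. Note that http_err_rate counts request 
errors and 4xx responses rather than backend 500s specifically, so it only 
approximates the goal:)

frontend www
    # track per-client HTTP error rate over a 10 minute window
    stick-table type ip size 200k expire 10m store http_err_rate(10m)
    tcp-request connection track-sc1 src
    # reject new connections from clients above the threshold
    acl abuser sc1_http_err_rate gt 20
    tcp-request connection reject if abuser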

Greets,

Sander



Re: haproxy and interaction with VRRP

2011-12-12 Thread Sander Klein

On 12.12.2011 10:28, Guillaume Castagnino wrote:

Le lundi 12 décembre 2011 10:18:33, Vincent Bernat a écrit :

Hi!

When haproxy is bound to an IP address managed by VRRP, this IP 
address
may be absent when haproxy starts. What is the best way to handle 
this?


  1. Start haproxy only when the host is master.
  2. Use transparent mode.
  3. Patch haproxy to use IP_FREEBIND option.


What about a 4:
- Add net.ipv4.ip_nonlocal_bind=1 to your sysctl.conf settings. No need
to patch anything.
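
(For reference, a sketch of applying that setting on a live box; sysctl.conf 
only takes effect at boot or when reloaded:)

# apply immediately, without rebooting
sysctl -w net.ipv4.ip_nonlocal_bind=1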


I use a 5:

I bind the vrrp addresses to a dummy interface. For example:
ip addr add 192.168.1.1/32 dev dummy0
ip addr add 2001:dead:beef::1/128 dev dummy0

Keepalived has this nice static address option for it.
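
(A sketch of the keepalived option referred to, reusing the example addresses 
above; this goes in keepalived.conf's global section and is untested here:)

static_ipaddress {
    192.168.1.1/32 dev dummy0
    2001:dead:beef::1/128 dev dummy0
}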

I started doing this because there is no nonlocal_bind option for IPv6 
(or I didn't search well enough (-: )


Greets,

Sander



Re: haproxy and interaction with VRRP

2011-12-12 Thread Sander Klein

On 12.12.2011 13:10, Vincent Bernat wrote:

On Mon, 12 Dec 2011 13:04:22 +0100, Sander Klein wrote:


I started doing this because there is no nonlocal_bind option for
IPv6 (or I didn't search well enough (-: )


From the source code, it seems that IPv4 non local bind sysctl also
applies to IPv6. Since 2.6.30.


Hmmm, then I'm going to look into it again. I'm running 2.6.39 and it
doesn't seem to work. Could be a problem on my side.


You are right. It only applies to v4-mapped addresses.


It would have been nice if it did work though... It's one of those 
features I'm missing.


Binding IPs to the dummy interface works, but it always feels a bit 
hacky and adds a lot of administrative overhead if you have lots of 
VRRP addresses.


Greets,

Sander



Re: haproxy and interaction with VRRP

2011-12-13 Thread Sander Klein

On 12.12.2011 14:32, Vincent Bernat wrote:

On Mon, 12 Dec 2011 13:23:11 +0100, Sander Klein wrote:

I started doing this because there is no nonlocal_bind option for
IPv6 (or I didn't search well enough (-: )


From the source code, it seems that IPv4 non local bind sysctl also
applies to IPv6. Since 2.6.30.


Hmmm, then I'm going to look into it again. I'm running 2.6.39 and it
doesn't seem to work. Could be a problem on my side.


You are right. It only applies to v4-mapped addresses.


It would have been nice if it did work though... It's one of those
features I'm missing.

Binding IPs to the dummy interface works, but it always feels a bit
hacky and adds a lot of administrative overhead if you have lots of
VRRP addresses.


Here is a patch for this (only slightly tested):

http://marc.info/?l=linux-netdev&m=132369656811468&w=2

It is targeted at the net-next branch and will not apply cleanly on a
vanilla kernel: you just need to remove the check on inet->freebind
which is not yet present in vanilla kernels.


Thanks! I'll have a look if I can get it working.



Possible bug in 1.5-dev7

2012-01-18 Thread Sander Klein

Hi,

I'm observing some strange behavior with slowstart and the track 
option.


When taking web1 out for maintenance and putting it back online, the 
weight of cluster1/web1 returns to 100 in 5 minutes but cluster2/web1 
stays stuck at 7.


Is this expected behavior?

I have the following config:

listen cluster1
 bind x.x.x.x:80
 mode http
 balance roundrobin

 option abortonclose
 option httpchk GET /check.php HTTP/1.0
 option http-server-close
 option redispatch

 server web1 y.y.y.1:80 cookie web1 weight 100 minconn 40 maxconn 70 
check inter 2000 rise 3 fall 2 slowstart 5m
 server web2 y.y.y.2:80 cookie web2 weight 100 minconn 40 maxconn 70 
check inter 2000 rise 3 fall 2 slowstart 5m


listen cluster2
 bind x.x.x.x:443
 mode tcp
 option abortonclose
 option redispatch

 server web1 y.y.y.1:443 weight 100 minconn 2 maxconn 5 track 
cluster1/web1 slowstart 5m
 server web2 y.y.y.2:443 weight 100 minconn 2 maxconn 5 track 
cluster1/web2 slowstart 5m


Greets,

Sander



Re: Possible bug in 1.5-dev7

2012-01-19 Thread Sander Klein

On 18.01.2012 11:08, Sander Klein wrote:

Hi,

I'm observing some strange behavior with slowstart and the track 
option.


When taking web1 out for maintenance and putting it back online, the
weight of cluster1/web1 returns to 100 in 5 minutes but cluster2/web1
stays stuck at 7.

Is this expected behavior?


Replying to myself...

Never mind, I see this has been fixed in 1.5-dev7-2001

Greets,

Sander



Re: Geotargeting and Server DOWN problem

2012-01-26 Thread Sander Klein

Hi,

On 26.01.2012 18:45, Sebastian Fohler wrote:

I'm trying to set up a load-balancing configuration with four backend
servers running nginx.
The first problem I had was that, checking the haproxy stats, they show
every backend server DOWN at least as much of the time as it is UP. How
can this be, and what could be the problem?


Are you doing active checks against the backend servers using haproxy?


Another problem I have is that the backend servers are using
geotargeting to deliver specific content to specific country
locations. Since the haproxy load balancer always has the same IP,
there seems to be some confusion with the geotargeting after
activating haproxy.


You might use real IP (http://wiki.nginx.org/HttpRealIpModule) and the 
haproxy 'option forwardfor' to solve the geotargeting problem.
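
(Roughly, a sketch of how the two halves fit together; the address is a 
placeholder for the haproxy machine, and the nginx directives come from the 
module linked above:)

# haproxy side: append the client address to X-Forwarded-For
option forwardfor

# nginx side: take the client address from X-Forwarded-For, but only
# when the request comes from the load balancer
set_real_ip_from 10.0.0.1;
real_ip_header X-Forwarded-For;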


Greets,

Sander



Re: Geotargeting and Server DOWN problem

2012-01-27 Thread Sander Klein

On 27.01.2012 16:01, Sebastian Fohler wrote:

Sorry, just found out that I definitely do an active check.
But for some reason every second refresh of my stats shows the 
servers down.

Any idea why that could be?
The servers are definitely up all that time.


Hmz, I don't know. I think it's helpful if you post more info, like 
your haproxy config.


Greets,

Sander



TIME_WAIT tuning

2012-01-27 Thread Sander Klein

Hi,

while benchmarking my new web-server cluster I quickly hit the limit of 
32768 sockets in TIME_WAIT state.


I've been looking around on the internet but I'm a bit confused whether 
this limit can be tuned somehow or whether it's a hard limit. I read about 
the tcp_fin_timeout and tcp_tw_reuse/recycle options but I don't think they 
will be of any use, since I hit the limit within a couple of seconds.


Can anyone give me a push in the right direction or even better, a 
detailed explanation? ;-)


I was also wondering if this limit is system wide or per IP. I have 
multiple VIPs on my loadbalancer.
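
(For what it's worth, a hedged sketch of the knobs usually involved; the 
sysctl names are real, but whether they are the actual bottleneck depends on 
the workload. For outgoing connections, the ~28-32k ceiling is typically the 
ephemeral port range per source/destination pair rather than a TIME_WAIT cap:)

# ephemeral source ports available per (src ip, dst ip, dst port) tuple
sysctl net.ipv4.ip_local_port_range
# system-wide cap on sockets held in TIME_WAIT
sysctl net.ipv4.tcp_max_tw_buckets
# count sockets currently in TIME_WAIT
ss -tan state time-wait | wc -l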


Greets,

Sander



Re: TIME_WAIT tuning

2012-01-29 Thread Sander Klein

Oh dear...

I did some more testing and it's not a problem with TIME_WAIT. It was a 
firewall in between.


During my last test I easily had 60,000 connections in TIME_WAIT state.

Greets,

Sander

On 27.01.2012 21:52, Sander Klein wrote:

Hi,

while benchmarking my new web-server cluster I quickly hit the limit
of 32768 sockets in TIME_WAIT state.

I've been looking around on the internet but I'm a bit confused whether
this limit can be tuned somehow or whether it's a hard limit. I read
about the tcp_fin_timeout and tcp_tw_reuse/recycle options but I don't
think they will be of any use, since I hit the limit within a couple of
seconds.

Can anyone give me a push in the right direction or even better, a
detailed explanation? ;-)

I was also wondering if this limit is system wide or per IP. I have
multiple VIP's on my loadbalancer.

Greets,

Sander





Re: TIME_WAIT tuning

2012-01-29 Thread Sander Klein

Hi Willy,

Thank you for your answer.

During my search on the internet I found a lot of articles about 
TIME_WAIT stuff and a limit of 32768. Since I had around that many 
sockets in TIME_WAIT I assumed this would be my problem.


I did enable tcp_tw_reuse, but I'm not sure if it will work because I'm 
doing my benchmarks using IPv6. Since the setting is in 
/proc/sys/net/ipv4 I assume it is for IPv4 only. But, then again, I 
could totally be wrong about that :-)


Greets,

Sander



Log 400 bad request

2012-02-10 Thread Sander Klein

Hi All,

I'm having a small problem with non-RFC2616 requests. I would like to 
log them, but haproxy only logs:


cluster1-in cluster1-in/NOSRV -1/-1/-1/-1/0 400 1951 - - PR-- 
235/235/0/0/0 0/0 {|||} {} BADREQ


Is there a way to log them with the full host header and URL?

I know I can show them with 'echo show errors | socat 
unix-connect:/var/run/haproxy.stat stdio' but since we don't know when 
and where the problems happen we would like to log it to a file.


Greets,

Sander



Re: Log 400 bad request

2012-02-13 Thread Sander Klein

Hi Willy,

On 13.02.2012 08:07, Willy Tarreau wrote:

You won't have it in the log because the request failed to parse
completely. Maybe we could improve the error path a bit to be able to
report the request URI when only headers fail; that would help.


In my case that won't help. I need to find the bad URI since there are 
unencoded UTF-8 characters in them.



I know I can show them with 'echo show errors | socat
unix-connect:/var/run/haproxy.stat stdio' but since we don't know 
when

and where the problems happen we would like to log it to a file.


Some people are doing this using scripts which regularly poll. That
should not be an issue since you should not have that many errors. There
is an event ID in this output so that your script knows whether the error
is an old one or a new one. For instance here:


Well, I'll write a script to poll every once in a while. Then I can 
practice my perl skills again ;-)
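
(A minimal polling sketch along those lines, in shell rather than perl; the 
interval and paths are examples, and a real script would track the event ID 
mentioned above so it only reports new errors:)

#!/bin/sh
while sleep 60; do
    echo "show errors" | socat unix-connect:/var/run/haproxy.stat stdio \
        >> /var/log/haproxy-errors.log
done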


Regards,

Sander



Crash with ss-20120310 and ss-20120311

2012-03-12 Thread Sander Klein

Hi,

today I've experienced 3 crashes on 2 servers with haproxy. I've never 
had any before so I thought I would just put a note up here.


20120310 crashed with:
Server 1
haproxy[3065] general protection ip:452ddf sp:7fff02906808 error:0 in 
haproxy[40+6e000]


Server 2
haproxy[30329]: segfault at a22312e314e ip 00452db4 sp 
7fff0553dbb8 error 4 in haproxy[40+6e000]


20120311 crashed with:
Server 1
haproxy[30497]: segfault at a223156 ip 00452def sp 
7fff343786c8 error 4 in haproxy[40+6e000]


I know they are snapshots, but since no snapshot has ever crashed on me 
before I thought it was worth mentioning. Maybe it's already a known 
issue, or it might be a new bug.


Greets,

Sander



Re: Crash with ss-20120310 and ss-20120311

2012-03-15 Thread Sander Klein

Hey Willy,

On 15.03.2012 07:53, Willy Tarreau wrote:

Hi,

On Tue, Mar 13, 2012 at 07:05:36PM +0100, Baptiste wrote:

Hey,

I guess Willy would be keen to get the core dump and the haproxy
binary with its configuration.
You should try to reach him directly.


Yes Sander, please can you send me a core if you're still willing
to run it? For this, you need to set ulimit -c unlimited before
starting haproxy and to disable the user, group and chroot settings
in the global section. I really understand it can be too much for you
depending on your usage.
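
(A sketch of those steps; the config path is an example:)

# allow core dumps in the shell that starts haproxy
ulimit -c unlimited
# comment out the user, group and chroot lines in the global section,
# then start haproxy as usual
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D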


Of course we are using it in production ;-) But I do have a fail-over 
setup, so I think I dare to run it again. It can take some time before 
it crashes.



If you could send me your config in private (without sensitive info
such as the stats password), it would immensely help.


Will do.

Do you care which snapshot I run?

Greets,

Sander



Re: Crash with ss-20120310 and ss-20120311

2012-03-15 Thread Sander Klein

On 15.03.2012 10:10, Willy Tarreau wrote:


Do you care which snapshot I run?


Ideally the first one which exhibited the issue. BTW, do you know which
most recent one you used without the issue? E.g., do you know if 20120306
has the same issue?


I'm currently running 20120207 which doesn't give me any problems. I 
upgraded straight to 20120310.


Sander



Re: haproxy with keepalived

2012-03-20 Thread Sander Klein

Hey Esteban,

Your config looks good to me.

Sometimes it can happen that during failover not all servers receive 
the gratuitous ARP and they keep sending traffic to the backup router.


I normally force another failover, which triggers another gratuitous 
ARP, to get it working again. It shouldn't happen too often, though.


Greets,

Sander




Re: haproxy 1.5dev7 server check failed with IPv6

2012-03-29 Thread Sander Klein

Hi,

On 29.03.2012 16:44, Delta Yeh wrote:

Hi,
   It seems haproxy fails to do server checks with IPv6.
   The topology is: browser -> haproxy -> www server

I did the following tests:
 1. IPv4 http server with server check: it works.
 2. IPv6 http server with server check: I get HTTP 503. After disabling
the server check, I get HTTP 200.
 3. IPv4 and IPv6 server with server check: I only see IPv4 check packets.

  I get 503 when I access WWW with IPv6. I get 200 when I access the WWW
server with IPv4.


Are you sure it's not a config error on the webserver side? I've been 
running dev7 for quite some time and do a lot of IPv6 checks. Never had 
any problems with it.


Greets,

Sander



Re: haproxy: *** glibc detected *** /usr/sbin/haproxy: double free or corruption (out): 0x0000000001ef41a0 ***

2012-05-22 Thread Sander Klein

Hmmm, I thought I typed more text...

On 22.05.2012 11:06, Sander Klein wrote:

Hi,

When I reload haproxy I get this message:

May 22 11:02:45 lb01-a haproxy: *** glibc detected ***
/usr/sbin/haproxy: double free or corruption (out): 
0x01ef41a0

***

I'm running haproxy 1.5-dev10 2012/05/13

If any more info is needed please let me know.


I was wondering if this message is a problem or some bug.

Regards,

Sander



Re: haproxy: *** glibc detected *** /usr/sbin/haproxy: double free or corruption (out): 0x0000000001ef41a0 ***

2012-05-31 Thread Sander Klein

Hi,


Hi,

When I reload haproxy I get this message:

May 22 11:02:45 lb01-a haproxy: *** glibc detected ***
/usr/sbin/haproxy: double free or corruption (out): 
0x01ef41a0

***

I'm running haproxy 1.5-dev10 2012/05/13

If any more info is needed please let me know.


I was wondering if this message is a problem or some bug.


I've also tested haproxy-1.5-ss-20120531 and it also gives the 
following error on reload:


May 31 08:52:42 lb01-b haproxy: *** glibc detected *** 
/usr/sbin/haproxy: double free or corruption (out): 0x01ece190 
***


Can this safely be ignored?

Greets,

Sander Klein



Re: haproxy: *** glibc detected *** /usr/sbin/haproxy: double free or corruption (out): 0x0000000001ef41a0 ***

2012-06-01 Thread Sander Klein

Hey Willy,

On 01.06.2012 01:03, Willy Tarreau wrote:

Sander,

first, thank you very much for your configuration, I could reproduce the
issue here. It's not 100% reproducible due to address randomization, but
common enough to get the issue.

The issue comes from the use of user-lists which are implied by stats auth.
User lists are resolved during parsing. They start with a string holding
the name of the user list, and are later resolved to point to the userlist
itself.

The issue is that during exit, we try to free everything before leaving
(well, it's mostly to save valgrind from shouting at those who use it). And
when ACL params are freed, the userlist names are freed too. But when the
pointer has already been resolved, it points to a userlist and not its name,
and this list has already been freed.

I've tried doing an ugly hack to confirm I can work on a solution. Right now
I don't want to release it, it's too hackish. I prefer that you keep seeing
glibc's error message for now, until I get a real and solid fix.


I will recontact you as soon as I have something clean.

Thank you again for the report and for providing the useful 
configuration.


Willy


Thank you for your answer. I'll ignore it then.

Greets,

Sander



Response headers max size

2012-06-21 Thread Sander Klein

Hi List,

We are using HAProxy 1.5-dev11 and have a small issue with it.

Some of our coders use php firebug when they are debugging code. php 
firebug puts a lot of stuff in the response headers (X-WF-* headers). 
But it looks like HAProxy blocks responses when the headers are larger 
than 8KB. Is there a way to make HAProxy accept larger response headers?
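
(A hedged sketch of the usual knob, values illustrative: a response's headers 
must fit in haproxy's buffer, whose size is set in the global section, with 
tune.maxrewrite reserving part of that buffer for header rewrites:)

global
    tune.bufsize    32768
    tune.maxrewrite 8192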


Regards,

Sander



Re: Haproxy and UTF8-encoded chars

2012-07-25 Thread Sander Klein

Hi,

On 25.07.2012 08:22, Stojan Rancic (Iprom) wrote:

Hello,

we're experiencing issues with HAProxy 1.5-dev11 rejecting GET
requests with UTF8-encoded characters. The encoding happens with
Javascript's Encode function for East European characters (š, č, ž,
etc.).


We are experiencing the same issue, but it only happens with Internet 
Explorer. So I figured it must be a bug on the Internet Explorer side 
and not on the HAProxy side, since Internet Explorer doesn't seem to 
encode the URL correctly.


Greets,

Sander



Re: Haproxy and UTF8-encoded chars

2012-07-26 Thread Sander Klein

On 26.07.2012 09:44, Stojan Rancic (Iprom) wrote:

On 25.7.2012 11:21, Sander Klein wrote:

We are experiencing the same issue, but it only happens with Internet
Explorer. So I figured it must be a bug on the Internet Explorer side
and not on the HAProxy side, since Internet Explorer doesn't seem to
encode the URL correctly.


I'm afraid I don't have any control over what browsers the users are
using, and I'm sure a fair amount of those are IE . And the fact that
I'm seeing \x escaped characters in both GET and Referrer headers
isn't helping any either.

How do you deal with IE users then ?


This is always a bit problematic.

If the URL is being generated from our software then we fix our 
software to create pre-encoded URLs. If it's 3rd party software, we tell 
the 3rd party to fix their stuff.


Currently we have one case where the 3rd party doesn't understand the 
issue, and then we just tell the users to start using a browser which 
does proper encoding.


Because of your question I wiresharked a bit yesterday to make sure I 
was giving you the right info. My tests showed that Safari, Firefox and 
Chrome do proper encoding of the URL before sending it and Internet 
Explorer only encodes some parts of the URL.


I also checked RFC3986, and it says in section 2.5, paragraph 6:

---
When a new URI scheme defines a component that represents textual
data consisting of characters from the Universal Character Set [UCS],
the data should first be encoded as octets according to the UTF-8
character encoding [STD63]; then only those octets that do not
correspond to characters in the unreserved set should be percent-
encoded.  For example, the character A would be represented as "A",
the character LATIN CAPITAL LETTER A WITH GRAVE would be represented
as "%C3%80", and the character KATAKANA LETTER A would be represented
as "%E3%82%A2".
---

So I definitely think Internet Explorer is doing it wrong. It relies on 
the fact that most web servers will encode the URL for them, which most 
actually do.


If you really want to accept the 'bad' URLs then you could enable 
'option accept-invalid-http-request', but I strongly recommend not 
enabling this in a production environment.
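
(For completeness, where that option would live if you accept the risk; the 
frontend name just reuses one from earlier in this archive:)

frontend cluster1-in
    # last resort: lets otherwise-rejected, non-compliant requests through
    option accept-invalid-http-request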


Greets,

Sander Klein



Re: unsubscribe

2012-09-21 Thread Sander Klein

no no no... isn't that cute, but it's wrong!

It says:
Subscribe to the list : haproxy+subscr...@formilux.org
Unsubscribe from the list : haproxy+unsubscr...@formilux.org

so mailing to haproxy+unsubscr...@formilux.org should do the trick...


On 21.09.2012 19:10, Svancara, Randall wrote:

Unsubscribe

- Reply message -
 From: Svancara, Randall rsvanc...@wsu.edu
 To: Fahd Sultan fsul...@brilig.com, haproxy@formilux.org
haproxy@formilux.org
 Subject: unsubscribe
 Date: Fri, Sep 21, 2012 9:58 am

Yeah, I have been wondering how to do this for years. Too bad that I
can not find any documentation on how to remove myself.

Randall Svancara

High Performance Computing Systems Administrator

Washington State University

509-335-3039

FROM: Fahd Sultan [mailto:fsul...@brilig.com]
 SENT: Friday, September 21, 2012 9:54 AM
 TO: haproxy@formilux.org
 SUBJECT: unsubscribe

unsubscribe

--
 Fahd Sultan | Director, IT Infrastructure
 Brilig - Powering Free Market Advertising™
 fsul...@brilig.com | +1.347.878.5826





Bug in 1.5-dev15, dev-14 and maybe lower?

2012-12-12 Thread Sander Klein

Hi All,

I recently upgraded to HAProxy dev-14 (and since this morning dev-15) 
from dev11-ss-20120604. But now we are experiencing uploads that are 
'hanging'.


When uploading a file over HTTP the upload suddenly stalls. I can't put 
my finger on why. Sometimes it is right after the upload starts, 
sometimes somewhere in the middle and (surprise, surprise!) sometimes 
almost at the end. After a while the upload continues again and finishes, 
or it stalls again. Uploads usually run at higher speeds, 200-300 Mbit/s, 
so HAProxy CPU usage goes up a bit (10-15%).


Is this a bug in HAProxy or is it my config? Downgrading to 
dev11-ss-20120604 fixes the issue.


Greets,

Sander Klein

My config:
###
# Global Settings
###
global
log 127.0.0.1 local0
#   log 127.0.0.1 local0 notice
#   log 127.0.0.1 local0 err
#   log 127.0.0.1 local1 debug

daemon
user    haproxy
group   haproxy
maxconn 32768
spread-checks   3
stats socket    /var/run/haproxy.stat mode 666 level admin

#debug
#quiet

###
# Defaults
###
defaults
log global
timeout check   2s
timeout client  60s
timeout connect 10s
timeout http-keep-alive 30s
timeout http-request    30s
timeout queue   30s
timeout server  60s
timeout tarpit  120s

errorfile 400 /etc/haproxy/errors.loc/400.http
errorfile 403 /etc/haproxy/errors.loc/403.http
errorfile 500 /etc/haproxy/errors.loc/500.http
errorfile 502 /etc/haproxy/errors.loc/502.http
errorfile 503 /etc/haproxy/errors.loc/503.http
errorfile 504 /etc/haproxy/errors.loc/504.http

###
# Define the admin section
###
listen admin
bind    xxx.xxx.xxx.xxx:8080
#   bind    :::::xx:8080
mode    http
stats enable
stats uri   /haproxy?stats
stats auth  admin:passwordhere!
stats admin if TRUE
stats refresh 5s

###
# Mass hosting frontend
###
frontend cluster1-in
# Mass hosting VIP
bind x.x.x.x:80
bind :::::xx:80

... more bind stuff...

mode http
maxconn 4096

option httplog
option dontlog-normal
option dontlognull
option forwardfor
option http-server-close
option splice-auto
option tcp-smart-accept

capture request header Host len 64
capture request header User-Agent   len 16
capture request header Content-Length   len 10
capture request header Referer  len 256
capture response header Content-Length  len 10

#
# Some security stuff starts here
#

# block annoying worms that fill the logs...
# deny NULL character, script tag and #removed 
xmlrpc.php#removed in URL's

acl forbidden_uris url_sub -i %00 script

# /../../ attacks
acl forbidden_uris url_reg -i 
(%2f|%5c|/|)(\.|%2e)(\.|%2e)(%2f|%5c|/|)

# Deny requests for following files:
acl forbidden_uris path_end -i /root.exe /cmd.exe /default.ida 
/awstats.pl .dll

# Deny script kiddy stuff eating our connections
acl forbidden_uris url_sub -f 
/etc/haproxy/filters/phpmyadmin.txt

block if forbidden_uris

# HTTP content smugling
acl forbidden_hdrs hdr_cnt(host) gt 1
acl forbidden_hdrs hdr_cnt(content-length) gt 1
acl forbidden_hdrs hdr_cnt(proxy-authorization) gt 0
block if forbidden_hdrs

# Block offensive User-Agents
acl offender hdr_sub(User-Agent) -i msnbot
acl offender hdr_sub(User-Agent) -i baiduspider
block if offender

# Remove bogus X-Forwarded-For headers
# We don't care about RFC1918
reqidel ^X-Forwarded-For:\ xxx\.xxx\.xxx
... more reqidel's like the above...

# Add X-Forwarded-Proto headers
acl no-ssl dst_port 80
reqadd X-Forwarded-Proto:\ http if no-ssl

# Web cluster
acl iscluster1-1  hdr(host) -f /etc/haproxy/cluster1-1.txt
acl iscluster1-2  hdr(host) -f /etc/haproxy/cluster1-2.txt
acl iscluster1-2  hdr_sub(host) -i some.domain
acl iscluster1-2  hdr_sub(host) -i other.domain
acl iscluster1-2  hdr_sub(host) -i another.domain

use_backend cluster1-1if iscluster1-1
use_backend cluster1-2if iscluster1-2

default_backend cluster1-1

###
# 1 backend
###
backend cluster1-1
fullconn    4096
mode    http

balance roundrobin

option abortonclose
option tcp-smart-connect
option redispatch
option httpchk GET /db.php HTTP/1.0

server

Re: Bug in 1.5-dev15, dev-14 and maybe lower?

2012-12-13 Thread Sander Klein

Hi Willy,

On 12.12.2012 22:53, Willy Tarreau wrote:

Hi Sander,
Could you try to disable the splice options just to see? And if that does
not change anything, please also try to disable option abortonclose. That
will help us narrow the issue down. Anyway, I don't see anything wrong with
your config.

If you can easily reproduce this, I'd be interested in getting a network
traffic capture on the machine running haproxy; I don't know if you can
get and send this.


I've disabled both splice and abort-on-close and that didn't fix it.

We somehow can't always reproduce this, but chances of triggering it 
are about 50% with some clients and near 0% on others. Pretty weird.


I'll send you the capture in another mail.

Greets,

Sander



Re: Testers wanted : about the stalled POST issues

2012-12-15 Thread Sander Klein

Hi Willy,


On 15.12.2012 09:14, Willy Tarreau wrote:

The bug is somehow very hard to trigger. But, I did manage to trigger
the bug with dev15 a couple of times and I have not been able to trigger
it with dev15-and-your-patch. So I think your patch fixes the issue.


Thank you very much for testing !

Then I will merge it, at least because it looks more correct to me than
the original code, and we'll continue to observe if new bugs are reported.




We've been testing a lot more this morning and we still experience 
POSTs that are stalling, but much less than with standard dev15 
without your patch. I'm not sure if it is fixed now or if we are seeing 
a second issue, either with haproxy or with something else.


Greets,

Sander



Rate limit URL or src IP

2013-04-02 Thread Sander Klein

Hi All,

I know this question has been asked many times, but currently I'm 
experiencing some problems with some people harvesting data from our 
websites at high rates. I would like to block them based on the URL or 
simply on the source IP.


Currently I've implemented the 'Limiting the HTTP request rate' setup from 
http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/ 
which works nicely, but now they are also starting to come in over IPv6. 
Can I modify this setup to also work with IPv6 without creating multiple 
frontends or backends?
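
(A hedged sketch of one direction, assuming a 1.5-dev build with tracked 
counters; names and thresholds are illustrative. A stick-table of type ipv6 
can hold both address families, IPv4 keys being stored v4-mapped, so a single 
frontend can track both:)

frontend ft_web
    # one table holds both IPv4 and IPv6 clients
    stick-table type ipv6 size 100k expire 10m store http_req_rate(10s)
    tcp-request connection track-sc1 src
    acl too_fast sc1_http_req_rate gt 100
    http-request deny if too_fast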


Greets,

Sander



Re: Problem with ss-20130402

2013-04-02 Thread Sander Klein

Hi!,

On 02.04.2013 16:16, Sander Klein wrote:


When using this config with ss-20130402 I do not get any traffic to
cluster1-2. I didn't have enough time to do a proper debug since I was
doing it in production ;-) I might have a better look at it this
evening. It works fine with ss-20130125.


Just tried ss-20130326 and this one works fine. So I think there's some 
kind of regression between 20130326 and 20130402.


Any ideas how to start debugging this?

Greets,

Sander



Re: Problem with ss-20130402

2013-04-02 Thread Sander Klein

Replying to myself again...

On 02.04.2013 16:59, Sander Klein wrote:

Hi!,

On 02.04.2013 16:16, Sander Klein wrote:


When using this config with ss-20130402 I do not get any traffic to
cluster1-2. I didn't have enough time to do a proper debug since I 
was

doing it in production ;-) I might have a better look at it this
evening. It works fine with ss-20130125.


Just tried ss-20130326 and this one works good. So I think there's
some kind of regression in between 20130326 and 20130402.

Any ideas how to start debugging this?


While wiresharking around a bit it seems the connection to the backend 
servers just 'hangs'. There's no traffic flowing at all. Just thought 
I'd share it here in case anybody cares ;-)


Greets,

Sander



Re: Problem with ss-20130402

2013-04-02 Thread Sander Klein

Hi Thomas,

On 02.04.2013 21:02, Thomas Heil wrote:

Of course, it matters. As you explained, the problem should be around
patches 86 up to 101. What does your haproxy -vv output look like? Do
you use compression or SSL? Could you eliminate patches 91, 92 and 98?


haproxy -vv looks like:

sander@lb01-a:~$ /usr/sbin/haproxy -vv
HA-Proxy version 1.5-dev17 2012/12/28
Copyright 2000-2012 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 
USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200


Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3.4
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 0.9.8o 01 Jun 2010
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

I do not use SSL or compression in my current config. I was actually 
upgrading to the latest snapshot to start using SSL :-)


I've recompiled without patches 91, 92 and 98 but I still see the same 
problem. The websites on the new cluster (nginx) don't load, or only 
partially load. And the sites on the old cluster (apache) behave 
normally.


I'm not sure, but it almost looks like the timing issue I had with 
POSTs back in the early dev17 days. Although I'm doing a simple GET 
right now.


Greets,

Sander




haproxy-dev18 http-request

2013-04-03 Thread Sander Klein

Hi,

I try to do the following in my haproxy (dev18) config:

http-request set-header X-Forwarded-Proto https if ssl_fc
http-request set-header X-Forwarded-Ssl on if ssl_fc

http-request set-header X-Forwarded-Proto http  if ! ssl_fc
http-request set-header X-Forwarded-Ssl off if ! ssl_fc

But, when I reload I get:

Reloading haproxy: haproxy[ALERT] 092/110441 (22291) : parsing 
[/etc/haproxy/haproxy.cfg:221]: 'http-request set-header' expects 
exactly 2 arguments.
[ALERT] 092/110441 (22291) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

 failed!

I'm a bit at a loss here, since I saw an example somewhere on the 
Exceliance site and if I read the haproxy configuration manual it 
states:


http-request { allow | deny | tarpit | auth [realm <realm>] | redirect <rule> |
               add-header <name> <fmt> | set-header <name> <fmt> }
              [ { if | unless } <condition> ]

I might be interpreting this wrong, but the way I read it, using an if 
statement with set-header is legal in the config. Am I wrong?


Greets,

Sander



Re: haproxy-dev18 http-request

2013-04-03 Thread Sander Klein

Hmmm, nope, it still doesn't work

I did:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Ssl off if !{ ssl_fc }

But this still gives me:

Reloading haproxy: haproxy[ALERT] 092/120655 (9669) : parsing 
[/etc/haproxy/haproxy.cfg:221]: 'http-request set-header' expects 
exactly 2 arguments.
[ALERT] 092/120655 (9669) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

 failed!

Greets,

Sander

On 03.04.2013 11:38, Baptiste wrote:

Hi,

You want to use anonymous ACLs which requires brackets '{' and '}', 
like:


http-request set-header X-Forwarded-Proto https if { ssl_fc }

Baptiste

On Wed, Apr 3, 2013 at 11:15 AM, Sander Klein roe...@roedie.nl 
wrote:



Hi,

I try to do the following in my haproxy (dev18) config:

http-request set-header X-Forwarded-Proto https if ssl_fc
http-request set-header X-Forwarded-Ssl on if ssl_fc

http-request set-header X-Forwarded-Proto http  if ! ssl_fc
http-request set-header X-Forwarded-Ssl off if ! ssl_fc

But, when I reload I get:

Reloading haproxy: haproxy[ALERT] 092/110441 (22291) : parsing 
[/etc/haproxy/haproxy.cfg:221]: 'http-request set-header' expects 
exactly 2 arguments.
[ALERT] 092/110441 (22291) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

 failed!

I'm a bit at a loss here, since I saw an example somewhere on the 
Exceliance site and if I read the haproxy configuration manual it 
states:


http-request { allow | deny | tarpit | auth [realm <realm>] | redirect <rule> |
               add-header <name> <fmt> | set-header <name> <fmt> }
              [ { if | unless } <condition> ]

I might be interpreting this wrong, but the way I read it, using an if 
statement with set-header is legal in the config. Am I wrong?


Greets,

Sander




Re: haproxy-dev18 http-request

2013-04-03 Thread Sander Klein

Hey Thomas,

That's indeed what I had, but the http-request directive seemed more 
efficient. And, because 
http://blog.exceliance.fr/2013/02/26/ssl-offloading-impact-on-web-applications/ 
stated it was possible I thought it would be a good idea to use it :-)


Greets,

Sander

On 03.04.2013 12:37, Thomas Heil wrote:

Hi,

 Why not using something like,

 reqidel ^X-Forwarded-Proto:.*
 reqadd X-Forwarded-Proto: https if { ssl_fc }
 reqadd X-Forwarded-Proto: http if ! { ssl_fc }

 cheers
 thomas

 On 03.04.2013 12:26, Baptiste wrote:


Ah sorry, I misread!

http-request set-header X-Frontend-SSL %[ssl_fc] https

%[ssl_fc] will be 0 in case of HTTP and 1 in case of SSL.

You can't setup an ACL after the set-header directive.

Baptiste

On Wed, Apr 3, 2013 at 12:09 PM, Sander Klein roe...@roedie.nl 
wrote:



Hmmm, nope, it still doesn't work

I did:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Ssl off if !{ ssl_fc }

But this still gives me:

Reloading haproxy: haproxy[ALERT] 092/120655 (9669) : parsing 
[/etc/haproxy/haproxy.cfg:221]: 'http-request set-header' expects 
exactly 2 arguments.
[ALERT] 092/120655 (9669) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

failed!

Greets,

Sander

On 03.04.2013 11:38, Baptiste wrote:


Hi,

You want to use anonymous ACLs which requires brackets '{' and '}', 
like:


http-request set-header X-Forwarded-Proto https if { ssl_fc }

Baptiste

On Wed, Apr 3, 2013 at 11:15 AM, Sander Klein roe...@roedie.nl 
wrote:



Hi,

I try to do the following in my haproxy (dev18) config:

http-request set-header X-Forwarded-Proto https if ssl_fc
http-request set-header X-Forwarded-Ssl on if ssl_fc

http-request set-header X-Forwarded-Proto http if ! ssl_fc
http-request set-header X-Forwarded-Ssl off if ! ssl_fc

But, when I reload I get:

Reloading haproxy: haproxy[ALERT] 092/110441 (22291) : parsing 
[/etc/haproxy/haproxy.cfg:221]: 'http-request set-header' expects 
exactly 2 arguments.
[ALERT] 092/110441 (22291) : Error(s) found in configuration file 
: /etc/haproxy/haproxy.cfg

failed!

I'm a bit at a loss here, since I saw an example somewhere on the 
Exceliance site and if I read the haproxy configuration manual it 
states:


http-request { allow | deny | tarpit | auth [realm <realm>] | redirect <rule> |
               add-header <name> <fmt> | set-header <name> <fmt> }
              [ { if | unless } <condition> ]

I might be interpreting this wrong, but the way I read it, using an if 
statement with set-header is legal in the config. Am I wrong?


Greets,

Sander




Re: haproxy-dev18 http-request

2013-04-03 Thread Sander Klein

On 03.04.2013 14:20, Willy Tarreau wrote:

On Wed, Apr 03, 2013 at 12:09:37PM +0200, Sander Klein wrote:

Hmmm, nope, it still doesn't work

I did:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Ssl off if !{ ssl_fc }


OK the bug was there from the beginning (1.5-dev16) and affects
both set-header and add-header. They control that no more word
is present on the line so they reject the if and unless
keywords...

I'm attaching the fix.


Great! I'll try the fix today.

Greets,

Sander



Re: haproxy-dev18 http-request

2013-04-03 Thread Sander Klein

On 03.04.2013 14:20, Willy Tarreau wrote:

On Wed, Apr 03, 2013 at 12:09:37PM +0200, Sander Klein wrote:

Hmmm, nope, it still doesn't work

I did:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Ssl off if !{ ssl_fc }


OK the bug was there from the beginning (1.5-dev16) and affects
both set-header and add-header. They control that no more word
is present on the line so they reject the if and unless
keywords...

I'm attaching the fix.


Yay! It works ;-)

Greets,

Sander



RE: dev18 splice-auto

2013-04-05 Thread Sander Klein

Hi Lukas,

On 05.04.2013 12:00, Lukas Tribus wrote:

What is the percentage of requests failing this way?


I'm not sure, but I think it's less than 1%. We do a couple of hundred 
requests per second and about every second I see one failed request.


Do you know if this is an issue introduced by a certain haproxy build,
and thus was working previously, or did you only recently enable
splice-auto? Are you able to reproduce this in dev17 or in stable 1.4.23
(but you probably rely on 1.5 features)?


I cannot try 1.4 because I indeed rely on 1.5 features. But I did try 
dev18 and dev17-ss-20130125. Both give the same problems. I cannot go 
any further back because I had some issues with versions before 
20130125, if I recall correctly. I'm not sure what it was anymore :-)


Can you remove splice-auto, and check whether splice-request or
splice-response or both are affected?


Using splice-request and splice-response I get the same issue.
Using splice-request gives no problems.
Using splice-response I get the issue again.

Do you see this in a lab setup as well or do you need to troubleshoot
this with production services?


I do not have a big lab setup in which I can reproduce this.

Are you able to tcpdump an affected session (both front and backend 
traffic)?


It is possible to do that, but only if really necessary. And I would 
probably only want to share that with the HAProxy developers directly.



I use kernel 3.2.40 with grsec patch


Any kernel messages in dmesg?


Nope, not anything out of the ordinary.

Do you have the possibility to install a stable but recent vanilla
kernel from kernel.org (I suppose 3.8.5 would be a good choice)? This
may as well be a kernel issue.


Vanilla 3.2.X would be possible, anything else is a bit more 
problematic. Not impossible, but I only want to do that if everything 
else fails.


Greets,

Sander



Re: dev18 splice-auto

2013-04-06 Thread Sander Klein
Heh, I didn't have time to test the previous one, but I'll test this one this 
evening. 

Greets,

Sander

On 6 apr. 2013, at 11:50, Willy Tarreau w...@1wt.eu wrote:

 Hi Sander,
 
 the patch I proposed was not enough, it only fixed a few of the
 occurrences. The issue was introduced in dev12 with the connection
 rework.
 
 Please use the attached patch, which I have tested to fix the issue here
 and merged.
 
 The issue mainly happens with chunked-encoded responses where splice()
 may read more data than expected, causing read() to fail to get the
 chunk size, and aborting the connection.
 
 Regards,
 Willy
 
 0001-BUG-MEDIUM-splicing-is-broken-since-1.5-dev12.patch



Re: dev18 splice-auto

2013-04-06 Thread Sander Klein

On 06.04.2013 11:50, Willy Tarreau wrote:

Hi Sander,

the patch I proposed was not enough, it only fixed a few of the
occurrences. The issue was introduced in dev12 with the connection
rework.

Please use the attached patch, which I have tested to fix the issue 
here

and merged.

The issue mainly happens with chunked-encoded responses where splice()
may read more data than expected, causing read() to fail to get the
chunk size, and aborting the connection.


Just to confirm, using the patch and splice-auto again everything seems 
to be fine.


Thanks again Willy.

Greets,

Sander



Add X-Forwarded-For

2013-05-08 Thread Sander Klein

Hi,

I want to move some websites behind CloudFlare. They already add an 
X-Forwarded-For header, so I do not want to add it if the request comes 
from CloudFlare, but I do want to add it if the request is not from 
CloudFlare.


Since both requests will pass through the same frontend I need some 
kind of ACL or whatever.


Is there a way to do this?

Greets,

Sander



Re: Add X-Forwarded-For

2013-05-08 Thread Sander Klein

Replying to myself ;-)

On 08.05.2013 10:52, Sander Klein wrote:

Hi,

I want to move some websites behind CloudFlare. They already add an
X-Forwarded-For header, so I do not want to add it if the request comes
from CloudFlare, but I do want to add it if the request is not from
CloudFlare.

Since both requests will pass through the same frontend I need some
kind of ACL or whatever.

Is there a way to do this?


I know I can use 'option forwardfor except [network]' but CloudFlare 
uses a lot of networks.


Greets,

Sander



Re: Add X-Forwarded-For

2013-05-08 Thread Sander Klein

Hey,


You have the optional argument if-none for option forwardfor,
but you should not do this with external proxies whose addresses
you don't know because anyone could pass one and fool you.


This doesn't feel like a good option ;-)


In practice you would need them to pass you some information to
prove the request comes from them. The best way to do this is to
do it over ssl.


Well, I know which networks they are using since they provide them on 
their website. That might be proof enough.


I didn't test if it's possible to do 'option forwardfor except 
192.168.1.0/24 192.168.2.0/24 etc...'


Even better would be to load it from a file.

Maybe the option from Finn Arne Gangstad might prove good enough for me 
and I can fix it with some reqidel statements.


Greets,

Sander



Re: Add X-Forwarded-For

2013-05-08 Thread Sander Klein
Thanks everyone for answering. I'll play around a bit with my config and the 
suggestions. 

Greets,

Sander

On 8 mei 2013, at 15:04, Willy Tarreau w...@1wt.eu wrote:

 On Wed, May 08, 2013 at 08:29:15AM -0400, John Marrett wrote:
 The definitive list of cloudflare IPs doesn't appear to be too unmanageable:
 
 https://www.cloudflare.com/ips
 
 They also provide convenient text files that just contain the IP address
 lists for easy automation.
 
 As Lukas says if you do not validate the IP addresses it's trivial for
 anyone to forge client IP addresses.
 
 I agree, and indeed the list is very small, I thought it was much larger,
 as akamai's which are much harder to deal with.
 
 I think the following method should work, though I have not tested it :
 
acl from_cf src -f cf-ips.txt   # list of cf's addresses, one per line
reqidel ^x-forwarded-for: if !from_cf
option forwardfor if-none
 
 It is supposed to remove xff from requests not coming from CF, and to add
 one only when there is none, which should do the trick.
 
 Willy
 
 



Possible bug with compression

2013-05-23 Thread Sander Klein

Hi,

I think I've found a possible bug with the combination SSL, compression 
and NTLM auth. But, I'm not sure if it's really a bug or if NTLM auth is 
crap (well it is...).


When enabling compression the authorization fails sometimes. When I 
disable compression everything is fine. I don't know if it's just a 
silly thing to enable compression in this situation. Has anyone else 
tried this?


I'm running haproxy-dev18-ss-20130512 and my config is like:

defaults
  log global

  mode http

  compression algo gzip

  option http-server-close
  option tcp-smart-accept
  option tcp-smart-connect
  option abortonclose

frontend default-fe
  bind 1.2.3.4:80
  bind a:b:c:d:e:f:80
  bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/some.pem ciphers 
RC4:HIGH:!aNULL:!MD5
  bind a:b:c:d:e:f:443 ssl crt /etc/haproxy/ssl/some.pem ciphers 
RC4:HIGH:!aNULL:!MD5


  maxconn 512

  option httplog
  option forwardfor
  option splice-auto

  # Add X-Forwarded-* headers
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Ssl on if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
  http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

  # Define hosts which need to redirect to HTTPS
  acl need_ssl hdr(Host) -i iis.host.local

  redirect scheme https if need_ssl ! { ssl_fc }

  # Define backends and redirect correct hostnames
  use_backend iis-backend if { hdr(Host) -i iis.host.local }

backend iis-backend
  fullconn 20

  no option http-server-close
  option httpchk GET / HTTP/1.0

  server iis-stuff 2.3.4.5:80 cookie iis check inter 2000


Regard,

Sander




Re: Possible bug with compression

2013-05-26 Thread Sander Klein

Hey Baptiste,

Thanks for your answer. Just to be sure: if I do 'option 
http-server-close' in the defaults section and then use 'no option 
http-server-close' in the backend, the option is disabled for 
connections to that backend, right?


I know http-server-close is not compatible with NTLM, but I also have 
backends to different servers which can use http-server-close. So I just 
disable it for certain backends.


Compression not being compatible with tunnel mode sounds good enough to 
me. If it's known then it's fine with me ;-)


Regards,

Sander

On 26.05.2013 16:04, Baptiste wrote:

Hi,

Your configuration is not compatible with NTLM.
NTLM requires that the connection remain open over time, or
authentication is broken.
When you enable http-server-close, haproxy will change the connection
for each HTTP request.
So disable it and you'll switch to tunnel mode.

That said, I'm almost sure compression is not compatible with tunnel
mode.


Baptiste


On Thu, May 23, 2013 at 10:44 AM, Sander Klein roe...@roedie.nl 
wrote:

Hi,

I think I've found a possible bug with the combination SSL, 
compression and
NTLM auth. But, I'm not sure if it's really a bug or if NTLM auth is 
crap

(well it is...).

When enabling compression the authorization fails sometimes. When I 
disable
compression everything is fine. I don't know if it's just a silly 
thing to

enable compression in this situation. Has anyone else tried this?

I'm running haproxy-dev18-ss-20130512 and my config is like:

defaults
  log global

  mode http

  compression algo gzip

  option http-server-close
  option tcp-smart-accept
  option tcp-smart-connect
  option abortonclose

frontend default-fe
  bind 1.2.3.4:80
  bind a:b:c:d:e:f:80
  bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/some.pem ciphers
RC4:HIGH:!aNULL:!MD5
  bind a:b:c:d:e:f:443 ssl crt /etc/haproxy/ssl/some.pem ciphers
RC4:HIGH:!aNULL:!MD5

  maxconn 512

  option httplog
  option forwardfor
  option splice-auto

  # Add X-Forwarded-* headers
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Ssl on if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
  http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

  # Define hosts which need to redirect to HTTPS
  acl need_ssl hdr(Host) -i iis.host.local

  redirect scheme https if need_ssl ! { ssl_fc }

  # Define backends and redirect correct hostnames
  use_backend iis-backend if { hdr(Host) -i iis.host.local }

backend iis-backend
  fullconn 20

  no option http-server-close
  option httpchk GET / HTTP/1.0

  server iis-stuff 2.3.4.5:80 cookie iis check inter 2000


Regard,

Sander






Re: LB Layout Question

2013-06-01 Thread Sander Klein

Hi,

On 01.06.2013 03:09, Brendon Colby wrote:
On Wed, May 29, 2013 at 6:46 AM, joris dedieu joris.ded...@gmail.com 
wrote:



Hi Syd,

I'm guessing an NFS share from the 2 webservers to the 1 
fileserver. However, from a bit of research into load-balanced 
magento setups there seem to be a lot of negative comments about 
using NFS in this way.


It's always better to avoid NFS as it introduces a point of failure.


It isn't always better. We have several TB of heavily accessed static
media files being served to our web servers over NFS. I don't think we
could do this another way viably.

If the NFS server is built with redundant power, ECC memory, RAID and
is connected to a UPS, I wouldn't be too worried about using NFS as
long as the hardware is properly chosen to accommodate the workload
and the OS and NFS are set up correctly.


I can certainly agree with this. We have a couple of hundred TBs which 
are served using multiple NFS shares. If the NFS server is chosen 
wisely, then there's no problem. You can use 2 NFS servers with a 
dual-controller SAS shelf underneath if you're worried about the NFS 
server failing.


I'm almost scared to say this, but we have never had a failure due to 
our NFS servers being unavailable or being swamped with requests. So I 
also think it's a very viable option. You just have to think things 
through a bit.


Greets,

Sander



Re: ssl sni and client certificate verification

2013-07-02 Thread Sander Klein

On 02.07.2013 10:39, Hudec Peter wrote:

Thanks Lukas,

I will try 1.5 version.

But for Debian this version is in experimental now ;( I will look if 
someone has already done it for Wheezy.


I have 1.5 packages for amd64 on my site. They are based on the 
packaging done by Vincent Bernat. They Work For Me (tm)


Look at http://www.roedie.nl/downloads/haproxy/

I also put snapshots there every once in a while if I hit a bug which 
bothers me. Which hasn't happened for some time now...


Greets,

Sander



Re: SSL problem with old browsers

2013-07-08 Thread Sander Klein
Hi

I think this is just related to IE8 on Windows XP not supporting SNI. But I 
could be wrong.

Greets,
Sander

On 8 jul. 2013, at 18:50, Jürgen Haas juer...@paragon-es.de wrote:

 This is a follow-up question to the other thread SSL Problem -
 Untrusted Connection which has meanwhile been resolved, thanks to Lukas
 and Duncan. My PEM files are now working properly.
 
 Here is what I have in the config file:
 
 frontend https-in
  bind :443 ssl crt /var/proxy/certs/fallback.pem crt 
 /var/proxy/certs/domain1.pem crt /var/proxy/certs/domain2.pem
  use_backend ssl_backend
 
 Now, when calling https://domain1 this works from all modern platforms
 and browsers. But a lot of customers with older equipment (i.e. most of
 them from within banking networks - no kidding) are reporting that their
 browser (IE8 on XP as an example) is warning them when visiting domain1
 on SSL. As I couldn't reproduce that problem from elsewhere, I just
 installed XP and IE8 and bang, yes I get the same warning.
 
 What happens is that HAProxy is using the fallback certificate.
 
 When I remove that and only have this config:
 
 frontend https-in
  bind :443 ssl crt /var/proxy/certs/domain1.pem
  use_backend ssl_backend
 
 Then everything works also on older systems.
 
 I think, from that we can assume that the certificates are just fine.
 But something with HAProxy seems not quite right for all circumstances
 if there are more than one CRTs in one bind statement.
 
 If anyone needed an environment for testing and reproduction, please let
 me know. I can provide more infos or even access to our system if that's
 necessary.
 
 Thanks
 Jürgen
 
 
 



webdav

2013-10-09 Thread Sander Klein

Hi,

Is it possible to use webdav with haproxy while in http mode? Or do I 
have to use tcp mode for that?


Regards,

Sander



Re: webdav

2013-10-09 Thread Sander Klein

Hey Baptiste,

We want to use it in front of svn and git. We won't actually do any load 
balancing with it. We just want to use haproxy as a single entry point 
to the repositories.


Greets,

Sander

On 09.10.2013 09:31, Baptiste wrote:

Hi Sander,

As long as webdav respects the HTTP RFC, there won't be any issues at all.

Which product are you targeting for your webdav deployment?

Baptiste

On Wed, Oct 9, 2013 at 8:57 AM, Sander Klein roe...@roedie.nl wrote:

Hi,

Is it possible to use webdav with haproxy while in http mode? Or do I
have to use tcp mode for that?

Regards,

Sander





Re: webdav

2013-10-10 Thread Sander Klein

Wicked, thanks for your answer.

Sander

On 10.10.2013 00:03, Bryan Talbot wrote:

I've used it in front of SVN running in apache httpd, proxying in
http mode with ssl. Works great.

-Bryan

 On Wed, Oct 9, 2013 at 1:59 AM, Sander Klein roe...@roedie.nl 
wrote:



Hey Baptiste,

We want to use it in front of svn and git. We won't actually do any 
load balancing with it. We just want to use haproxy as a single entry 
point to the repositories.


Greets,

Sander

On 09.10.2013 09:31, Baptiste wrote:


Hi Sander,

As long as webdav respects the HTTP RFC, there won't be any issues at 
all.


Which product are you targeting for your webdav deployment?

Baptiste

On Wed, Oct 9, 2013 at 8:57 AM, Sander Klein roe...@roedie.nl 
wrote:



Hi,

Is it possible to use webdav with haproxy while in http mode? Or do I
have to use tcp mode for that?

Regards,

Sander




glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

Hi,

I've compiled 1.5-dev20 on Debian Wheezy and now I get a double free or 
corruption bug. Haproxy will not start.


*** glibc detected *** /usr/sbin/haproxy: double free or corruption 
(fasttop): 0x03c5a880 ***

=== Backtrace: =
/lib/x86_64-linux-gnu/libc.so.6(+0x76d76)[0x6853e222fd76]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x6c)[0x6853e2234aac]
/usr/sbin/haproxy[0x466c36]
/usr/sbin/haproxy[0x467224]
/usr/sbin/haproxy[0x460ddd]
/usr/sbin/haproxy[0x46129e]
/usr/sbin/haproxy[0x418549]
/usr/sbin/haproxy[0x421472]
/usr/sbin/haproxy[0x407f2a]
/usr/sbin/haproxy[0x406639]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd)[0x6853e21d7ead]
/usr/sbin/haproxy[0x4071fd]
=== Memory map: 
0040-00496000 r-xp  08:05 65203  
/usr/sbin/haproxy
00695000-0069d000 rw-p 00095000 08:05 65203  
/usr/sbin/haproxy

0069d000-006a9000 rw-p  00:00 0
006a9000-03b8e000 ---p  00:00 0
03b8e000-03c68000 rw-p  00:00 0  
[heap]

6853dc00-6853dc021000 rw-p  00:00 0
6853dc021000-6853e000 ---p  00:00 0
6853e1568000-6853e157d000 r-xp  08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6853e157d000-6853e177d000 ---p 00015000 08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6853e177d000-6853e177e000 rw-p 00015000 08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6853e177e000-6853e1789000 r-xp  08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6853e1789000-6853e1988000 ---p b000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6853e1988000-6853e1989000 r--p a000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6853e1989000-6853e198a000 rw-p b000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6853e198a000-6853e1994000 r-xp  08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6853e1994000-6853e1b93000 ---p a000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6853e1b93000-6853e1b94000 r--p 9000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6853e1b94000-6853e1b95000 rw-p a000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6853e1b95000-6853e1baa000 r-xp  08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6853e1baa000-6853e1da9000 ---p 00015000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6853e1da9000-6853e1daa000 r--p 00014000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6853e1daa000-6853e1dab000 rw-p 00015000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so

6853e1dab000-6853e1dad000 rw-p  00:00 0
6853e1dad000-6853e1db4000 r-xp  08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6853e1db4000-6853e1fb3000 ---p 7000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6853e1fb3000-6853e1fb4000 r--p 6000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6853e1fb4000-6853e1fb5000 rw-p 7000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6853e1fb5000-6853e1fb7000 r-xp  08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6853e1fb7000-6853e21b7000 ---p 2000 08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6853e21b7000-6853e21b8000 r--p 2000 08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6853e21b8000-6853e21b9000 rw-p 3000 08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6853e21b9000-6853e2339000 r-xp  08:02 211866 
/lib/x86_64-linux-gnu/libc-2.13.so
6853e2339000-6853e2539000 ---p 0018 08:02 211866 
/lib/x86_64-linux-gnu/libc-2.13.so
6853e2539000-6853e253d000 r--p 0018 08:02 211866 
/lib/x86_64-linux-gnu/libc-2.13.so
6853e253d000-6853e253e000 rw-p 00184000 08:02 211866 
/lib/x86_64-linux-gnu/libc-2.13.so

6853e253e000-6853e2543000 rw-p  00:00 0
6853e2543000-6853e257f000 r-xp  08:02 211948 
/lib/x86_64-linux-gnu/libpcre.so.3.13.1
6853e257f000-6853e277f000 ---p 0003c000 08:02 211948 
/lib/x86_64-linux-gnu/libpcre.so.3.13.1
6853e277f000-6853e278 rw-p 0003c000 08:02 211948 
/lib/x86_64-linux-gnu/libpcre.so.3.13.1
6853e278-6853e2782000 r-xp  08:05 978315 
/usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.1
6853e2782000-6853e2981000 ---p 2000 08:05 978315 
/usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.1
6853e2981000-6853e2982000 rw-p 1000 08:05 978315 

Re: glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

On , Willy Tarreau wrote:

Hi Sander,

On Mon, Dec 16, 2013 at 09:43:07AM +0100, Sander Klein wrote:

Hi,

I've compiled 1.5-dev20 on debian wheezy and now I get a double free 
or

corruption bug. Haproxy will not start.


Interesting, I never experienced this one. Could you please run it 
through

gdb and issue bt full ?

Otherwise if you can send me privately the config you use to reproduce
this, without sensitive information, it would be great!


Hmmm, I think something is not right here. I do have debugging symbols 
in the binary but I get nothing AFAICS. Am I doing something wrong here? 
Or is the SIGABRT the problem?


I'll send you my config.

GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show 
copying

and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /home/sander/haproxy...done.
(gdb) run -f /etc/haproxy/haproxy.cfg -D
Starting program: /home/sander/haproxy -f /etc/haproxy/haproxy.cfg -D
warning: no loadable sections found in added symbol-file system-supplied 
DSO at 0x6d43ce93c000
*** glibc detected *** /home/sander/haproxy: double free or corruption 
(fasttop): 0x00fe3b90 ***

======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x76d76)[0x6d43cd53bd76]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x6c)[0x6d43cd540aac]
/home/sander/haproxy[0x466c36]
/home/sander/haproxy[0x467224]
/home/sander/haproxy[0x460ddd]
/home/sander/haproxy[0x46129e]
/home/sander/haproxy[0x418549]
/home/sander/haproxy[0x421472]
/home/sander/haproxy[0x407f2a]
/home/sander/haproxy[0x406639]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd)[0x6d43cd4e3ead]
/home/sander/haproxy[0x4071fd]
======= Memory map: ========
0040-00496000 r-xp  08:08 6635592
/home/sander/haproxy
00695000-0069d000 rw-p 00095000 08:08 6635592
/home/sander/haproxy

0069d000-006a9000 rw-p  00:00 0
006a9000-00f18000 ---p  00:00 0
00f18000-01002000 rw-p  00:00 0  
[heap]

6d43c800-6d43c8021000 rw-p  00:00 0
6d43c8021000-6d43cc00 ---p  00:00 0
6d43cc874000-6d43cc889000 r-xp  08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6d43cc889000-6d43cca89000 ---p 00015000 08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6d43cca89000-6d43cca8a000 rw-p 00015000 08:02 211757 
/lib/x86_64-linux-gnu/libgcc_s.so.1
6d43cca8a000-6d43cca95000 r-xp  08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6d43cca95000-6d43ccc94000 ---p b000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6d43ccc94000-6d43ccc95000 r--p a000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6d43ccc95000-6d43ccc96000 rw-p b000 08:02 211810 
/lib/x86_64-linux-gnu/libnss_files-2.13.so
6d43ccc96000-6d43ccca r-xp  08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6d43ccca-6d43cce9f000 ---p a000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6d43cce9f000-6d43ccea r--p 9000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6d43ccea-6d43ccea1000 rw-p a000 08:02 211924 
/lib/x86_64-linux-gnu/libnss_nis-2.13.so
6d43ccea1000-6d43cceb6000 r-xp  08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6d43cceb6000-6d43cd0b5000 ---p 00015000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6d43cd0b5000-6d43cd0b6000 r--p 00014000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so
6d43cd0b6000-6d43cd0b7000 rw-p 00015000 08:02 211919 
/lib/x86_64-linux-gnu/libnsl-2.13.so

6d43cd0b7000-6d43cd0b9000 rw-p  00:00 0
6d43cd0b9000-6d43cd0c r-xp  08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6d43cd0c-6d43cd2bf000 ---p 7000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6d43cd2bf000-6d43cd2c r--p 6000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6d43cd2c-6d43cd2c1000 rw-p 7000 08:02 211824 
/lib/x86_64-linux-gnu/libnss_compat-2.13.so
6d43cd2c1000-6d43cd2c3000 r-xp  08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6d43cd2c3000-6d43cd4c3000 ---p 2000 08:02 211807 
/lib/x86_64-linux-gnu/libdl-2.13.so
6d43cd4c3000-6d43cd4c4000 r--p 2000 08:02 211807 
/lib/x86_64-linux-gnu

Re: glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

On , Willy Tarreau wrote:

OK here's the fix, it was not a big deal, just a missing NULL
after a free when loading patterns from a file. Thank you for
your quick help Sander!


Something is fishy. I've compiled a new version with your patch, haproxy 
starts but it 'just doesn't work (tm)'.


I know this is a useless vague description but it is the best I have 
right now. I try and have a look later to see why web pages do not load 
with this new haproxy version.


Greets,

Sander



Re: glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

On , Sander Klein wrote:

On , Willy Tarreau wrote:

OK here's the fix, it was not a big deal, just a missing NULL
after a free when loading patterns from a file. Thank you for
your quick help Sander!


Something is fishy. I've compiled a new version with your patch,
haproxy starts but it 'just doesn't work (tm)'.

I know this is a useless vague description but it is the best I have
right now. I try and have a look later to see why web pages do not
load with this new haproxy version.


Replying to myself a bit.

All connections seem to get status CQ. Haproxy 1.5-ss-20131105 doesn't 
have this problem.


Again, I'll try and see if I can get a better description later today or 
tomorrow.


Greets,

Sander



Re: glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

On , Willy Tarreau wrote:

On Mon, Dec 16, 2013 at 01:10:11PM +0100, Sander Klein wrote:

On , Willy Tarreau wrote:
OK here's the fix, it was not a big deal, just a missing NULL
after a free when loading patterns from a file. Thank you for
your quick help Sander!

Something is fishy. I've compiled a new version with your patch, 
haproxy

starts but it 'just doesn't work (tm)'.

I know this is a useless vague description but it is the best I have
right now. I try and have a look later to see why web pages do not 
load

with this new haproxy version.


You should check logs to see if you think the traffic follows the
correct backends. We could indeed imagine an ACL match issue related
to the thing I just fixed.


I see that the correct backends are selected. It looks like this.

Dec 16 13:05:45 localhost haproxy[28322]: x.x.x.x:49389 
[16/Dec/2013:13:05:40.833] cluster1-in cluster1-53/web008 
8/4314/-1/-1/4322 503 1995 - - CQVN 552/401/211/6/0 68/0 
{some.site.com|Mozilla/5.0 
(Win||http://some.site.com/url/goes/here/24?q_searchfield=something} {} 
GET /url/goes/here/36?q_searchfield=something HTTP/1.1


Greets,

Sander



Re: glibc double free or corruption with 1.5-dev20

2013-12-16 Thread Sander Klein

On , Willy Tarreau wrote:

On Mon, Dec 16, 2013 at 02:19:28PM +0100, Sander Klein wrote:

On , Willy Tarreau wrote:
On Mon, Dec 16, 2013 at 01:10:11PM +0100, Sander Klein wrote:
On , Willy Tarreau wrote:
OK here's the fix, it was not a big deal, just a missing NULL
after a free when loading patterns from a file. Thank you for
your quick help Sander!

Something is fishy. I've compiled a new version with your patch,
haproxy
starts but it 'just doesn't work (tm)'.

I know this is a useless vague description but it is the best I have
right now. I try and have a look later to see why web pages do not
load
with this new haproxy version.

You should check logs to see if you think the traffic follows the
correct backends. We could indeed imagine an ACL match issue related
to the thing I just fixed.

I see that the correct backends are selected. It looks like this.

Dec 16 13:05:45 localhost haproxy[28322]: x.x.x.x:49389
[16/Dec/2013:13:05:40.833] cluster1-in cluster1-53/web008
8/4314/-1/-1/4322 503 1995 - - CQVN 552/401/211/6/0 68/0
{some.site.com|Mozilla/5.0
(Win||http://some.site.com/url/goes/here/24?q_searchfield=something} 
{}

GET /url/goes/here/36?q_searchfield=something HTTP/1.1


It indicates the visitor aborts while waiting in the queue, so 
typically
a click on the STOP button while waiting. There are 68 other requests 
in
the backend's queue, 211 connections on the backend and 6 on the 
server.


In your config, I'm seeing a minconn 100 on the server, so the server 
is

not full. The slowstart could possibly limit the accepted concurrency
however. I'll have to see if something changed with slowstart (I'm not
aware of any change there).


Hmmm, dev20 does a slowstart when haproxy starts. Dev19 (and before) 
doesn't do that.


It even does a slowstart when I reload the config file. That doesn't 
seem right to me.


Greets,

Sander



haproxy dev21 high cpu usage

2013-12-17 Thread Sander Klein

Hi,

I've enabled http-keep-alive in my config and now haproxy continuously 
peaks at 100% CPU usage where without http-keep-alive it only uses 
10-13% CPU.


Is this normal/expected behavior?

Greets,

Sander




Re: haproxy dev21 high cpu usage

2013-12-17 Thread Sander Klein

On , Willy Tarreau wrote:

On Tue, Dec 17, 2013 at 10:44:12AM +0100, Guillaume Castagnino wrote:

On Tuesday, 17 December 2013 at 10:32:30, Sander Klein wrote:
 Hi,

 I've enabled http-keep-alive in my config and now haproxy continuously
 peaks at 100% CPU usage where without http-keep-alive it only uses
 10-13% CPU.

 Is this normal/expected behavior?

Hi,

Indeed, I can confirm this behaviour when enabling server-side
keepalive.


So it looks like the simple idle connection manager I did yesterday
is still not perfect :-/
I tried to trigger this case but could not manage to make it fail,
so I considered that was OK.

Any information to help reproduce it is welcome, of course!


Well, if you still have my config you can replace all http-server-close 
stuff with http-keep-alive and remove the httpclose options which were 
accidentally in there. It happens as soon as I start haproxy so I don't 
know what triggers it.


Greets,

Sander



UDP loadbalancing

2013-12-30 Thread Sander Klein

Hi,

I know haproxy doesn't do UDP loadbalancing, but I figured someone here 
might know a nice tool which can do this for me. (If haproxy could do it 
it would have been nice though... ;-) )


I've looked at pen but it doesn't seem to do IPV6.

LVS can do the trick but I need to reconfigure a bit too much for my 
taste.


So, are there any other UDP loadbalancers out there?

Regards,

Sander



http-keep-alive broken?

2013-12-30 Thread Sander Klein

Hi,

I'm using haproxy ss-20131229 to reverse proxy some windows iis server 
with ntlm-auth enabled (one of them being exchange 2012).


While I understood that using 'option http-keep-alive' would make 
ntlm-auth work, it doesn't work for me. Are there still some issues with 
http-keep-alive and ntlm-auth?


My config is like:

frontend default-fe
bind x.x.x.x:80
bind 2001:::::1:80
bind x.x.x.x:443 ssl crt /etc/haproxy/ssl/blah.pem crt ciphers 
RC4:HIGH:!aNULL:!MD5
bind 2001:::::1:443 ssl crt 
/etc/haproxy/ssl/blah.pem crt ciphers RC4:HIGH:!aNULL:!MD5


maxconn 512

option httplog
option forwardfor
option splice-auto

# Add X-Forwarded-* headers
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

# Define hosts which need to redirect to HTTPS
acl need_ssl hdr(Host) -i some.host.com

redirect scheme https if need_ssl ! { ssl_fc }

# Define backends and redirect correct hostnames
use_backend foo if { hdr(Host) -i another.host.com }
use_backend bar if { hdr(Host) -i some.host.com }


###
# backend foo
###
backend foo
fullconn 30
option http-keep-alive
option httpchk GET / HTTP/1.0

server foo x.x.x.x:443 cookie foo check inter 2000

###
# backend bar
###
backend bar
fullconn 30

option http-keep-alive
option httpchk GET / HTTP/1.0

server bar y.y.y.y:443 cookie bar ssl check inter 2000

Greets,

Sander



Re: UDP loadbalancing

2013-12-31 Thread Sander Klein

On , Willy Tarreau wrote:

On Tue, Dec 31, 2013 at 12:44:26AM +0100, Lukas Tribus wrote:

Hi,


 Hi,

 I know haproxy doesn't do UDP loadbalancing, but I figured someone here
 might now A nice tool which can doe this for me. (If haproxy could do it
 it would have been nice though... ;-) )

 I've looked at pen but it doesn't seem to do IPV6.

 LVS can do the trick but I need to reconfigure a bit to much for my
 taste.

 So, are there any other UDP loadbalancers out there?


I suspect there aren't many, because load-balancing UDP via classic 
userspace software is not very popular.

What application/service/protocol are you trying to load balance?

Any way you can do this via ECMP?


In general I see LVS deployed for this. The reason is simple : 
UDP-based
services are generally not proxy-compatible because some IP addresses 
are
implied or transported in the protocol. Thus working in full 
transparent
mode is often the only way to go, and with LVS you can do that in DSR 
mode.

In fact, DNS might be one of the rare exceptions!



I actually do want to balance DNS. Well, actually I want to make it highly 
available, and since I already have 2 haproxy loadbalancers running I 
figured it would be easy enough to (mis)use them for that.
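
If I do end up on the LVS/DSR route Willy mentions, a minimal sketch with 
ipvsadm would be something like this (untested, and the addresses are 
placeholders):

ipvsadm -A -u 192.0.2.53:53 -s rr
ipvsadm -a -u 192.0.2.53:53 -r 192.0.2.10 -g
ipvsadm -a -u 192.0.2.53:53 -r 192.0.2.11 -g

i.e. one UDP virtual service on port 53 with two real servers in 
direct-routing (DSR) mode.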


Greets,

Sander



RE: http-keep-alive broken?

2014-01-02 Thread Sander Klein

On 31.12.2013 00:50, Lukas Tribus wrote:

Hi,

Subject: http-keep-alive broken?

Hi,

I'm using haproxy ss-20131229 to reverse proxy some windows iis server
with ntlm-auth enabled (one of them being exchange 2012).

While I understood that using 'option http-keep-alive' would make
ntlm-auth work, it doesn't work for me. Are there still some issue 
with

http-keep-alive and ntlm-auth?


Honestly I would just use the default tunnel mode for this, so I don't
have to think about the NTLM crap when choosing 
keep-alive/load-balancing

parameters.

If you would like to combine NTLM-auth plus keep-alive, I'd propose 
enabling:

 option prefer-last-server

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20prefer-last-server


While I do agree that using tcp-mode would make stuff easier, I also need 
to do some redirecting on the host-header, which is AFAIK not possible 
while in tcp-mode. (I might be wrong)


I tried moving 'option http-keep-alive' to the frontend section but that 
didn't help. I also used 'option prefer-last-server' but that didn't 
help either, and I think it wouldn't make any difference since it only 
redirects to one server.
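
For reference, what I tried is roughly this (a sketch based on the backend 
from my earlier mail):

backend foo
    option http-keep-alive
    option prefer-last-server
    option httpchk GET / HTTP/1.0
    server foo x.x.x.x:443 cookie foo check inter 2000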


The docs say that http-keep-alive should be useful if (quote):

  - when the server is non-HTTP compliant and authenticates the 
connection

instead of requests (eg: NTLM authentication)

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-keep-alive

But as far as I have tested, it only breaks NTLM auth badly. So, either 
I'm doing something wrong, or haproxy is doing something wrong, or the 
docs are wrong about the NTLM part :-)


Greets,

Sander



Re: http-keep-alive broken?

2014-01-03 Thread Sander Klein

Hi Baptiste, Lukas,

@Lukas: Sorry I misread your tunnel-mode for tcp-mode. Tunnel-mode works 
(almost) fine as you can read below.


I have been investigating my problem a bit more, and then I remembered 
that I also updated haproxy a week before we started using our new 
Windows 2012 servers.


The problem I'm having (also tested with ss-20140101 yesterday) happens 
with http-keep-alive enabled and also when just running in tunnel mode. 
But, when http-keep-alive is enabled I get the problem with ~98% of the 
requests and in tunnel mode I get it with ~10% of the requests. 
Authentication seems to succeed but the connection just 'hangs'. 
Sometimes refreshing 10 times fixes it.


I have downgraded to dev19 this morning and it seems that the problem 
went away in tunnel mode. (http-keep-alive is of course not available)


While I am not sure yet, it could be that something broke between dev19 
and dev21. This may sound a bit silly, but connections to our IIS servers 
'feel faster and more responsive' when using dev19.


I will build a small test environment to see if I can reproduce it and 
capture some traffic. Right now it's just a hunch.


My config is below. When I use http-keep-alive I just uncomment the 
'option http-keep-alive' and comment the 'no option http-server-close'.


###
# Global Settings
###
global
log 127.0.0.1 local0

daemon
user    haproxy
group   haproxy
maxconn 32768
spread-checks   3
stats socket    /var/run/haproxy.stat mode 666 level admin

###
# Defaults
###
defaults
log global

mode http

option abortonclose

timeout check   2s
timeout client  10s
timeout connect 10s
timeout http-keep-alive 30s
timeout http-request30s
timeout queue   15s
timeout server  10s
timeout tarpit  120s

###
# Define the admin section
###
listen admin
bind X.X.X.1:8080
bind 2001:x:x:x::1:8080
stats enable
stats uri   /haproxy?stats
stats auth  admin:somepass
stats admin if TRUE
stats refresh 5s

###
# Frontend for services
###
frontend default-fe
bind X.X.X.37:80
bind 2001:X:X:X:6:80
bind X.X.X.37:443 ssl crt /etc/haproxy/ssl/cert.pem crt 
/etc/haproxy/ssl/othercert.pem ciphers RC4:HIGH:!aNULL:!MD5
bind 2001:X:X:X::6:443 ssl crt /etc/haproxy/ssl/cert.pem crt 
/etc/haproxy/ssl/othercert.pem ciphers RC4:HIGH:!aNULL:!MD5


option httplog
option forwardfor

# Add X-Forwarded-* headers
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

# Define hosts which need to redirect to HTTPS
acl need_ssl hdr(Host) -i blah
acl need_ssl hdr(Host) -i host1
acl need_ssl hdr(host) -i host2
acl need_ssl hdr(host) -i host3

redirect scheme https if need_ssl ! { ssl_fc }

# Define backends and redirect correct hostnames
use_backend mgmt if { hdr(Host) -i blah }
use_backend mgmt if { hdr(Host) -i somehost }
use_backend mgmt if { hdr(Host) -i anotherhost }

use_backend app1 if { hdr(Host) -i host1 }

use_backend app2 if { hdr(Host) -i host2 }
use_backend app3 if { hdr(Host) -i host3 }

http-request redirect location http://some.site if { hdr(Host)  
-i something }


###
# backend_mgmt
###
backend mgmt
fullconn 20

option http-server-close
option httpchk GET / HTTP/1.0

server mgmt-01 192.168.1.7:80 cookie mgmt-01 check inter 2000

###
# backend app1
###
backend app1
fullconn 5

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET /url HTTP/1.0

server app1 192.168.1.30:80 cookie app1 check inter 2000

###
# backend app2
###
backend app2
fullconn 512

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET / HTTP/1.0

server app2 192.168.1.46:443 cookie app2 ssl check inter 2000

###
# backend app3
###
backend app3
fullconn 512

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET / HTTP/1.0

server app3 192.168.1.44:443 cookie app3 ssl check inter 2000






RE: http-keep-alive broken?

2014-01-04 Thread Sander Klein

Heyz,

On 03.01.2014 22:52, Lukas Tribus wrote:

Hi,


The problem I'm having (also tested with ss-20140101 yesterday) 
happens
with http-keep-alive enabled and also when just running in tunnel 
mode.
But, when http-keep-alive is enabled I get the problem with ~98% of 
the

requests and in tunnel mode I get it with ~10% of the requests.
Authentication seems to succeed but the connection just 'hangs'.
Sometimes refreshing 10 times fixes it.


Ah, that's interesting. Then the issue is probably not directly related to 
keep-alive, it probably just triggers with a much higher likelihood.


Well, after spending some time compiling, testing, compiling, testing, I 
finally found that the patch 
0103-OPTIM-MEDIUM-epoll-fuse-active-events-into--1.5-dev19.diff, applied 
between 20131115 and 20131116, is causing my problems.


I also found that this problem is much easier to reproduce on Safari 
than on Firefox or Chrome.


The weird thing is that this commit has been reverted in dev21 but I 
still have the problem in dev21. So I am a bit confused


Greets,

Sander



RE: http-keep-alive broken?

2014-01-04 Thread Sander Klein

Hey,

On 03.01.2014 22:52, Lukas Tribus wrote:
You said that one of your backends is exchange 2012. What release are 
the
other ntlm-auth backends exactly and is the issue the same on all of 
them?


All backends are windows 2012 with the standard IIS that comes with it. 
I have the problem on all of them. But not always on the same time.


Greets,

Sander



RE: http-keep-alive broken?

2014-01-05 Thread Sander Klein

Hey,

On 05.01.2014 17:33, Lukas Tribus wrote:

Hi,


Well, after spending some time compiling testing compiling testing I
finally found that the patch
0103-OPTIM-MEDIUM-epoll-fuse-active-events-into--1.5-dev19.diff done
between 20131115 and 20131116 is causing my problems.

I also found that this problem is much easier to reproduce on Safari
than on Firefox or Chrome.


Ok. Can you try if disabling epoll works around this problem (noepoll in 
the config or command-line argument -de [1]) to double check it has to do 
with epoll?


Disabling epoll doesn't fix it... drat... Tested it with ss-20140104. 
Could it be that it's a more subtle bug somewhere else? The (reverted) 
epoll patch and some other patch currently included may make it easier 
to trigger?



The weird thing is that this commit has been reverted in dev21 but I
still have the problem in dev21. So I am a bit confused


No, dev-21 doesn't have this revert. dev-21 was released December 16th and 
the offending commit 2f877304ef (from November 15th) was reverted via 
commit 3ef5af3dcc on December 20th.


Sorry, that's actually what I meant.


Just to be on the safe side: could you download a clean and uptodate
snapshot haproxy-ss-20140104 [2], to avoid any missing patches?


Did that, with and without epoll enabled and it both fails.

So in the end, haproxy-ss-20131115 [3] works fine and 
haproxy-ss-20131116

[4] has this problem, correct?

I know this triple checking sucks, but what you are reporting doesn't 
make

sense because, like you said yourself, this was reverted.


No problem, check as much as you want. It sucks if I somehow push you 
guys in the wrong direction.


But, yes, that is correct. 20131115 works and 20131116 doesn't. I tested 
it a couple of times. The bug is very, very subtle, I just found out. 
When using OSX 10.9 with Safari 7 it fails with, for instance, 20140104 
and 20131116, and works with 20131115. But if I take an older 10.8 
machine with Safari 6 it works with all versions.


I am losing my mind here ;-) I'm pretty sure I saw other 
platforms/browsers hang in the same way, but that was all under load: ~150 
people accessing their servers during office starting hours.


Greets,

Sander



Re: http-keep-alive broken?

2014-01-06 Thread Sander Klein

On 06.01.2014 15:10, Willy Tarreau wrote:
I would go even further (using git). What I understand here is that the 
issue
was introduced after the epoll optimization and is hidden by this one. 
So I'd
rather start by reverting that patch and then looking up for another 
faulty

patch after those :

  1) create a new branch called test1 starting at the first faulty 
commit :


 git checkout -b test1 2f877304

  2) apply the revert patch first :

 git cherry-pick 3ef5af3d

  3) OK now both the faulty patch and the revert are merged, it makes 
sense

 to confirm that the bug is still not there.

  4) now rebase all further patches on top of these ones : Git will 
re-apply
 all other patches after the ones above. You will thus have a 
working

 version to start from :

 git checkout -b test2 master
 git rebase test1

  5) ensure that branch test2 is wrong by doing a test

  6) bisect the code from test1 which was verified to be good at 3) and 
test2

 which was verified to be bad at 5) :

 git bisect start test2 test1

It will offer you another patch which introduced the regression hidden 
by the

one above.


I will do the bisect as soon as I have time.

On a side note, today I had an issue with another loadbalancer running 
ss-20140101 which showed almost the same behavior as the 'NTLM' bug I 
was having (hanging connection, or waiting a long time and then 
giving a corrupt file). This bug only happened with certain downloads 
(jpg's) with http compression enabled. If a browser requested the file 
without the compression header everything was fine.


Downgrading to dev19 also fixed this issue. I don't know if this could 
be related somehow.


Greets,

Sander



Re: http-keep-alive broken?

2014-01-09 Thread Sander Klein

Hi,

I'm sorry you haven't heard from me yet. But I didn't have time to look 
into this issue. Hope to do it this weekend.


Greets,

Sander



Re: http-keep-alive broken?

2014-01-10 Thread Sander Klein

Heyz,

On 10.01.2014 09:14, Willy Tarreau wrote:

Hi Sander,

On Fri, Jan 10, 2014 at 08:57:18AM +0100, Sander Klein wrote:

Hi,

I'm sorry you haven't heard from me yet. But I didn't have time to 
look

into this issue. Hope to do it this weekend.


Don't rush on it, Baptiste has reported to me a reproducible issue on 
his
lab which seems to match your problem, and which is caused by the way 
the

polling works right now (which is the reason why I want to address this
before the release). I'm currently working on it. The fix is far from 
being

trivial, but necessary.


Do you still want me to bisect? Or should I wait? If you think the 
problem is the same I'll just test the fix :-)


Sander



Support IP_FREEBIND

2014-03-03 Thread Sander Klein

Hi,

would it be possible to support IP_FREEBIND with HAProxy-1.5 on linux?

I'm asking because nonlocal_bind only works for IPv4 and it seems linux 
upstream does not want to support nonlocal_bind for IPv6.


A thread about this can be found here: 
http://comments.gmane.org/gmane.comp.web.haproxy/7317


Currently I'm binding IP's to a dummy interface so HAProxy can start, 
but this is starting to become a nightmare.


Greets,

Sander



Re: Support IP_FREEBIND

2014-03-03 Thread Sander Klein

On 03.03.2014 14:45, Sander Klein wrote:

Hi,

would it be possible to support IP_FREEBIND with HAProxy-1.5 on linux?

I'm asking because nonlocal_bind only works for IPv4 and it seems
linux upstream does not want to support nonlocal_bind for IPv6.

A thread about this can be found here:
http://comments.gmane.org/gmane.comp.web.haproxy/7317

Currently I'm binding IP's to a dummy interface so HAProxy can start,
but this is starting to become a nightmare.


Replying to myself... I'm probably looking for the 'transparent' option. 
Looking at the docs it seems to do what I want...


Greets,

Sander



Re: [PATCH] MINOR: set IP_FREEBIND on IPv6 sockets in transparent mode

2014-03-04 Thread Sander Klein

On 03.03.2014 21:31, Willy Tarreau wrote:

On Mon, Mar 03, 2014 at 09:10:51PM +0100, Lukas Tribus wrote:
Lets set IP_FREEBIND on IPv6 sockets as well, this works since Linux 
3.3

and doesn't require CAP_NET_ADMIN privileges (IPV6_TRANSPARENT does).

This allows unprivileged users to bind to non-local IPv6 addresses, 
which

can be useful when setting up the listening sockets or when connecting
to backend servers with a specific, non-local source IPv6 address (at 
that

point we usually dropped root privileges already).


Patch applied, thank you Lukas!


I will test the patch. Stupid question, but is it really supported from 
3.3 and higher? A quick test with dev22 yesterday seemed to be working 
but I didn't put any traffic through it. It was late so I didn't give it 
enough attention ;-)


Sander



Re: System tuning for Haproxy

2014-03-12 Thread Sander Klein

On 12.03.2014 10:36, William Lewis wrote:

Hi,

I’m looking for any advice in tuning kernel parameters for haproxy.

Current sysctl.conf is

net.ipv4.icmp_echo_ignore_broadcasts = 1
fs.file-max = 800
vm.swappiness = 20
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_syn_backlog = 32768
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 4096 65535
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_timestamps = 0
net.core.rmem_default = 262144
net.core.rmem_max = 52428800
net.core.wmem_max = 52428800
net.core.somaxconn = 65535
net.ipv4.tcp_user_cwnd_max = 20
kernel.nmi_watchdog = 0
net.ipv4.igmp_max_memberships = 2000
net.ipv4.igmp_max_msf = 2000


I had a *lot* of troubles with enabling net.ipv4.tcp_tw_recycle. Under 
heavy load connections started breaking up. Setting it back to 0 fixed 
it. So you might want to be careful with that one.


Read 
http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html for 
more info on that.
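
If you want to double-check your own box, reverting the setting at runtime 
is just:

sysctl -w net.ipv4.tcp_tw_recycle=0

(and remove the line from sysctl.conf so it stays off after a reboot).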


Greets,

Sander



Re: Generating a haproxy cluster

2014-03-26 Thread Sander Klein

Hi

On 24.03.2014 18:35, Andy Walker wrote:

For what it's worth, haproxy can be running on a server, and listening
on IP addresses that aren't actually associated with that server. In
linux, just make sure NET.IPV4.IP_NONLOCAL_BIND is set to 1, and
this will allow haproxy to bind to addresses that aren't currently
associated with that server. This is handy for very basic HA solutions
like keepalived, where you may just want the HA service managing IPs,
and not necessarily turning on and off haproxy as well.


Just to add a little note to this, you could also use 'transparent' in 
your bind directive instead of the NET.IPV4.NONLOCAL_BIND setting. This 
way the config will work with both IPv4 and IPv6.
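
A minimal sketch, with placeholder addresses:

frontend fe
    bind 192.0.2.1:80 transparent
    bind 2001:db8::1:80 transparent

Note that 'transparent' relies on the kernel's TPROXY support (haproxy 
built with USE_LINUX_TPROXY) and needs haproxy to start with 
root/CAP_NET_ADMIN privileges.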


Regards,

Sander



Re: Generating a haproxy cluster

2014-03-26 Thread Sander Klein

Hey,

On 26.03.2014 12:17, Jarno Huuskonen wrote:

Hi,

On Wed, Mar 26, Sander Klein wrote:

Hi

On 24.03.2014 18:35, Andy Walker wrote:
For what it's worth, haproxy can be running on a server, and listening
on IP addresses that aren't actually associated with that server. In
linux, just make sure NET.IPV4.IP_NONLOCAL_BIND is set to 1, and
this will allow haproxy to bind to addresses that aren't currently
associated with that server. This is handy for very basic HA solutions
like keepalived, where you may just want the HA service managing IPs,
and not necessarily turning on and off haproxy as well.

Just to add a little note to this, you could also use 'transparent'
in your bind directive instead of the NET.IPV4.NONLOCAL_BIND
setting. This way the config will work with both IPv4 and IPv6.


Also one option could be to bind all addresses to 'lo' interface
(http://comments.gmane.org/gmane.comp.web.haproxy/7317)

(this seems to work for ipv6 addresses). I have something like
this in /etc/sysconfig/haproxy:

#
# UEF: add ipv6 addrs to lo
#
LOADDRS=('2001:xyz:xyz:xyz::bad:201/64' '2001:xyz:xyz:xyz::bad:202/64')
for addr in "${LOADDRS[@]}"; do
    /sbin/ip -6 addr show lo | /bin/grep -q ${addr} > /dev/null
    if [ $? -ne 0 ]; then
        /sbin/ip -6 addr add ${addr} dev lo
    fi
done

(and keepalived manages/adds those addresses to ethX interface).


I had that indeed (since I'm in that thread you're referencing ;-) ) but 
sometimes ran into problems when using this in combination with 'vmac' on 
keepalived. The transparent option is much cleaner since it makes sure 
you don't run into sudden mac leakage or other self-inflicted 
stupidities.


Greets,

Sander



CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

Hi,

I noticed a dramatic increase in CPU usage between HAProxy ss-20140329 
and ss-20140425. With the former, haproxy uses around 20% of CPU; with 
the latter it eats up 80-90% of CPU and sites start to become sluggish. 
Health checks take much more time to complete: 1100ms vs the normal 2ms.


Nothing in my config has changed and when I downgrade everything returns 
to normal.


Info about haproxy:

haproxy -vvv
HA-Proxy version 1.5-dev23-3c1b5ec 2014/04/24
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 
USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 
200


Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.




Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

Hey Willy,

On 25.04.2014 14:39, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 02:12:23PM +0200, Sander Klein wrote:

Hi,

I noticed a dramatic increase in CPU usage between HAProxy ss-20140329
and ss-20140425. With the first haproxy uses around 20% of CPU and 
with
the latter it eats up 80-90% of cpu and sites start to become 
sluggish.

Health checks take much more time to complete 1100ms vs 2ms normal.

Nothing in my config has changed and when I downgrade everything 
returns

to normal.


I really don't like that at all :-(

I remember that you were running with gzip compression enabled in your 
config,

is that still the case ? It would be possible that in the past very few
responses were compressed due to the bug forcing us to disable 
compression
of chunked-encoded objects, and that now they're compressed and eat all 
the

CPU. In order to be sure about this, could you please try to disable
compression in your config just as a test ? Otherwise, despite the 
numerous

changes, I see very few candidates for such a behaviour :-/


I currently don't have compression enabled in my config. I disabled it 
some time ago because of CPU usage ;-)


With the current snapshot I do get some warnings:

[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:185] : 
a 'http-request' rule placed after a 'reqxxx' rule will still be 
processed before.
[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:240] : 
a 'block' rule placed after an 'http-request' rule will still be 
processed before.


But I don't suspect those are the issue.

Just to make sure I didn't give you a bogus report, I 
upgraded/downgraded a couple of times, but every time I install 20140425 
the CPU spikes and sites become sluggish.


Care for my config?

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 15:46, Willy Tarreau wrote:

Just to make sure I didn't give you a bogus report is
upgraded/downgraded a couple of times, but every time I install 
20140425

the CPU spikes and sites become sluggish.


OK. Does it happen immediately or does it take some time ?


It happens immediately. It might take some time, 0-10 seconds, before I 
see the health checks jump in check time. Every time it's another check 
that spikes in time. It's not like they are all continuously high with 
latency.





Care for my config?


Sure!


In a separate mail ;-)

Greets,

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 15:46, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 03:34:14PM +0200, Sander Klein wrote:

I currently don't have compression enabled in my config. I disabled it
some time ago because of CPU usage ;-)


Ah too bad, it would have been an easy solution!


With the current snapshot I do get some warnings:

[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:185] 
:

a 'http-request' rule placed after a 'reqxxx' rule will still be
processed before.
[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:240] 
:

a 'block' rule placed after an 'http-request' rule will still be
processed before.

But I don't suspect those are the issue.


No they're unrelated, the check was not made in the past and could
lead to possibly erroneous configs, so now the warning tells you
how your current configuration is understood and used (hint: just
move your reqxxx rules *after* http-request rules, then move the
block rules before http-request and the warning will go away).


Just to make sure I didn't give you a bogus report is
upgraded/downgraded a couple of times, but every time I install 
20140425

the CPU spikes and sites become sluggish.


OK. Does it happen immediately or does it take some time ?


Care for my config?


Sure!


I've done a search and it breaks between 20140413 and 20140415.

Greets,

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 17:22, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 04:56:06PM +0200, Sander Klein wrote:

I've done a search and it breaks between 20140413 and 20140415.


OK, that's already very useful. I'm assuming this covers the period
between commits 01193d6ef and d988f2158. During this period, here's
what changed that could possibly affect your usage, even if unlikely :

  - replacements for sprintf() using snprintf() : it would be possible
that some of them would be mis-computed and result in a wrong size
causing something to loop over and over. At first glance it does
not look like this but it could be ;

  - getaddrinfo() is used by default now and you have the build option 
for
it. Your servers are referenced by their IP addresses so I don't 
see

why that could fail. Still it's possible to disable this by setting
the global statement nogetaddrinfo in the global section if you 
want

to test. It's highly unlikely that it could be related but it could
trigger a corner case bug somewhere else.

  - ssl: Add standardized DH parameters >= 1024 bits
(I still don't understand what this is about, I'm clearly far from
being even an SSL novice). I have no idea whether it can be related
or not, but at least you're using SSL so everything is possible.

  - fix conversion of ipv4 to ipv6 in stick tables : you don't have 
any.


  - language converter : you don't have it

  - unique-id : you don't have it

  - crash on sc_tracked : you don't use it.

Thus given your setup, I'd start with the thing I understand the least, 
which is the SSL change. Could you please revert the attached patch
by applying it with patch -Rp1 ?



Well, I can confirm that reverting that patch fixes my issue. Got 
20140415 running now and CPU usage is normal.


Greets,

Sander



RE: CPU increase between ss-20140329 and ss-20140425

2014-04-26 Thread Sander Klein

Hey All,

Sorry for my late response, but we have a national holiday here... 
'Kings day' would be the translation ;-)


On 26.04.2014 13:53, Lukas Tribus wrote:

Hi,



- recommit the patch I submitted as it is, and let users concerned 
with

the CPU impact use static DH parameter in the certificate file.


What do you mean by use static DH parameter in the cert file ? Is 
this
something the user can decide after the cert is emitted ? Is it 
something

easy to do ?


Yes, Emeric's hard-coded dhparams or Remi's automated dhparams are only 
a

fallback in case the crt file doesn't contain dhparams.

The file needs to look like:
crt /path/to/cert+privkey+intermediate+dhparam

Whereas the dhparam are simply the result of:
 openssl dhparam 1024/2048/...


Also, one important thing to understand here is that this matters only 
with *_DHE_* ciphers. It's not used with legacy non-PFS RSA ciphers or 
with ECDHE ciphers.

For example, not a single browser uses _DHE_ ciphers on demo.1wt.eu [2], 
so the problem would never show (unless an attacker uses DHE deliberately 
to saturate the server's CPU).


Sander, can you tell us your exact cipher configuration? It may be
suboptimal. I would recommend the configuration from [3]. Do you
have a lot of Java 6 clients connecting to this service btw?


My cipher config is:

ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

I've disabled sslv3 and use certificates with 4096bit keys. I know 4096bit 
keys are a bit over the top, but while testing the impact seemed to 
be acceptable, so I thought 'What the heck, let's just use it'.


I'll have a look at the recommended config from [3].

I don't think there are a lot of java clients connecting. We do expose 
some api's which might be accessed by java clients, but that wouldn't be 
more than 1% of the clients.



Also check if tls-tickets and ssl-session caching works correctly.


ssllabs says ssl resumption (caching) and ssl resumption (tickets) are 
working.
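
(Locally you can also check session-ID resumption with something like:

openssl s_client -connect your.host:443 -reconnect < /dev/null

which reconnects a few times with the same session and reports whether it 
was reused — the hostname is a placeholder.)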


Greets,

Sander



RE: CPU increase between ss-20140329 and ss-20140425

2014-04-26 Thread Sander Klein

On 26.04.2014 16:07, Lukas Tribus wrote:

Hi,


I've disabled sslv3 and use certificates with 4096bits keys. I know 
4096
bits keys are a bit over the top, but while testing the impact seemed 
to

be acceptable so I thought 'What the heck, let's just use it'


That's it: with Remi's patch your dhparam was upgraded to 4096bit; we
assumed they had been upgraded to 2048bit only.

DHE with 4096bit keys and dhparam will clearly kill performance.


Drat, so my nice labtest with haproxy and different key sizes was 
completely useless :-) It does explain why I didn't understand the 
problem with 4096bit keys.


Sander



RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Sander Klein

On 02.05.2014 16:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).



Please note that Sander used 4096bit - which is why he saw huge CPU 
load.


Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 
2048bit

dhparam manually (in the cert file).


I'll try to test around a bit this weekend.

Sander



RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-05 Thread Sander Klein

On 02.05.2014 16:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).



Please note that Sander used 4096bit - which is why he saw huge CPU 
load.


Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 
2048bit

dhparam manually (in the cert file).



I've added a 2048bit dhparam to my most used certificates and I don't 
see a big jump in resource usage.


This was not a big scientific test, I just added the DH params in my 
production and looked if the haproxy process started eating more CPU. As 
far as I can tell CPU usage went up just a couple percent. Not a very 
big deal.
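
For the record, what I did is roughly this (the path is a placeholder), 
appending the generated parameters to the PEM file haproxy already loads:

openssl dhparam 2048 >> /etc/haproxy/ssl/site.pem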


So, to me, using 2048bit doesn't seem like a problem. And... I can 
always switch to nbproc > 1 ;-)


Greets,

Sander



Re: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-19 Thread Sander Klein

On 19.05.2014 06:51, Willy Tarreau wrote:

Hi Rémi,

On Mon, May 12, 2014 at 06:34:01PM +0200, Remi Gacogne wrote:

Hi,

On 05/05/2014 12:06 PM, Sander Klein wrote:

 I've added a 2048bit dhparam to my most used certificates and I don't
 see a big jump in resource usage.

 This was not a big scientific test, I just added the DH params in my
 production and looked if the haproxy process started eating more CPU. As
 far as I can tell CPU usage went up just a couple percent. Not a very
 big deal.

 So, to me, using 2048bit doesn't seem like a problem. And... I can
 always switch to nbproc > 1 ;-)

Thank you Sander for taking the time to do this test! I am still not
sure it is a good idea to move a default of 2048 bits though.

Here is a new version of the previous patch that should not require
OpenSSL 0.9.8a to build, but instead includes the needed primes from
rfc2409 and rfc3526 if OpenSSL does not provide them. I have to admit 
I

don't have access to an host with an old enough OpenSSL to test it
correctly. It still defaults to use 1024 bits DHE parameters in order
not to break anything.

Willy, do you have any thoughts about this patch or any other way to
simplify the use of stronger DHE parameters in haproxy 1.5? I know it
can already be done by generating static DH parameters, but I am 
afraid

most administrators may find it too complicated and therefore not dare
to test it.


I'd have applied a very simple change to your patch : I'd have 
initialized
global.tune.ssl_max_dh_param to zero by default, and emitted a warning 
here :


+   if (global.tune.ssl_max_dh_param <= 1024) {
+           /* we are limited to DH parameter of 1024 bits anyway */
+           Warning("Setting global.tune.ssl_max_dh_param to 1024 by
default, if your workload permits it you should set it to at least
2048. Please set a value >= 1024 to make this warning disappear.");
+           global.tune.ssl_max_dh_param = 1024;
+           dh = ssl_get_dh_1024();
+           if (dh == NULL)
+                   goto end;

What do you think ? That way it seems like only people really using the 
default

value will get the warning.


What happens if you also have DH appended to your certificates? You set 
global.tune.ssl_max_dh_param to 1024 but you have a 4096bit DH in your 
certificate file, which one is used then? An answer could be 'Don't do 
that' :-) but I was curious.


Greets,

Sander



Re: [ANNOUNCE] haproxy-1.5.0

2014-06-20 Thread Sander Klein

On 19.06.2014 21:54, Willy Tarreau wrote:

Hi everyone,

The list has been unusually silent today, just as if everyone was 
waiting

for something to happen :-)

Today is a great day, the reward of 4 years of hard work. I'm 
announcing the

release of HAProxy 1.5.0.


Congratulations!

Now people can finally stop bugging me about using dev versions in 
production, let's upgrade to 1.6-dev0 ;-)


Sander



Re: Just had a thought about the poodle issue....

2014-10-20 Thread Sander Klein

On 18.10.2014 16:37, David Coulson wrote:

You mean like this?

http://blog.haproxy.com/2014/10/15/haproxy-and-sslv3-poodle-vulnerability/


On 10/18/14, 10:34 AM, Malcolm Turnbull wrote:
I was thinking Haproxy could be used to block any non-TLS 
connection

Like you can with iptables:
https://blog.g3rt.nl/take-down-sslv3-using-iptables.html

However it would be nice if you had users trying to connect via IE6/7
etc on XP to display a nice message like, please upgrade to a secure
browser chrome or firefox etc?

Is that easy to do?


Is something like this also possible with SNI or strict-SNI enabled? I 
would like to issue a message when a browser doesn't support SNI.


Sander



Regex

2014-12-01 Thread Sander Klein

Hi,

I'm testing some stuff with quite a big regex and now I am wondering 
what would be more efficient: is it more efficient to load the regex 
with -i, or is it better to spell out the case-insensitivity in the regex?


So,

-i (some|words)

or

((S|s)(O|o)(M|m)(E|e)|(W|w)(O|o)(R|r)(D|d)(S|s))

Greets,

Sander



Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 12:09, Mathieu Sergent wrote:

Hi,

I am trying to set up load balancing with HAProxy and 3 web servers.
I want to receive the client's address on my web servers.
I read that it is possible with the option 'source ip usesrc' but
you need to be root.
If you don't want to be root, you have to use HAProxy with Tproxy.
But Tproxy demands too much system configuration.
Is there another solution?
I hope that you have understood my problem.

Yours sincerely.

Mathieu Sergent

PS : Sorry for my English.


Your English is no problem. ;-)

You can add an X-Forwarded-For header using haproxy. If you then use 
mod_rpaf for apache or realip on nginx you can easily substitute the 
loadbalancer ip with the ip of the client.
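
A minimal sketch of both halves (addresses are placeholders, and the 
nginx side assumes the realip module is compiled in):

# haproxy
backend web
    option forwardfor
    server web1 192.0.2.10:80 check

# nginx
set_real_ip_from 192.0.2.1;      # the load balancer's IP
real_ip_header X-Forwarded-For;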


Regards,

Sander




Re: Help haproxy

2015-02-02 Thread Sander Klein

Hi Mathieu,

Pleas keep the list in the CC.

On 02.02.2015 15:26, Mathieu Sergent wrote:

Thanks for your reply.

I just used the option forwardfor in the haproxy configuration, and I
can find the client's address from my web server (with tcpdump).
But if I don't use the option forwardfor, the web server still finds
the client's address. Does that make any sense?


To be honest, that doesn't make any sense to me. Are you sure you have 
reloaded the haproxy process after you removed the forwardfor?


Or, could it be you are using the proxy protocol (send-proxy)?

Greets,

Sander



Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 16:33, Mathieu Sergent wrote:

Hi Sander,

Yes, I reloaded haproxy and my web server too. But no change.
 And I'm not using the proxy protocol.

To give you more details: on my web server I used tcpdump, which
gives me back the header of the HTTP request, and in there I found
my client's address.
But it is really strange that I can do it without the forwardfor.


The only other thing that I can think of is that your client is behind a 
proxy server which adds the X-Forwarded-For header for you...


Or you got something strange in your config...

Sander



Re: Serveur Haproxy

2015-01-20 Thread Sander Klein

On 20.01.2015 10:54, andriatsiresy johary wrote:

J'ai mis en place un système de load balancing d'un cluster de base
de données, avec HAProxy, sur une debian 7, j'ai activer la page de
statistique de HAProxy et je ne sais pas ou trouver le code source de
ce page, pourriez-vous m'aider s'il vous plait. Merci


The statistics page is generated by the haproxy process itself, so 
there is no HTML file that gets updated every time.


Or are you really looking for the place in the haproxy source code 
where this page is built?


Sander



Re: HA proxy configuartion

2015-05-04 Thread Sander Klein

On 2015-05-04 07:35, ANISH S IYER wrote:

Hi

while configuring HAProxy:

mv /etc/haproxy/haproxy.cfg{,.original}

what is the meaning of this line? What do you mean by 'original'?


It will move the file haproxy.cfg to haproxy.cfg.original. So, it is the 
same as mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.original
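
You can see the brace expansion for yourself without moving anything:

echo /etc/haproxy/haproxy.cfg{,.original}
/etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.original

It simply keeps a backup of the distribution-shipped config before you 
start editing it.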


Sander



Re: HA proxy configuartion

2015-05-04 Thread Sander Klein

Hey,

please keep it on the list...

On 2015-05-04 10:19, ANISH S IYER wrote:

Hi
thanks for your fast reply

after configuring the HA proxy

the log file seems like

May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_in started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_in started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_http started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_http started.
May  4 03:42:00 discourse haproxy[3590]: Proxy admin started.
May  4 03:42:00 discourse haproxy[3590]: Proxy admin started.
May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.
May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.
May  4 03:42:00 discourse haproxy[3590]: backend haproxy_http has no
server available!
May  4 03:42:00 discourse haproxy[3590]: backend haproxy_http has no
server available!


The problem appears to be this:

May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.

Haproxy cannot connect to your backend servers. Maybe you are using the 
wrong ip/port or some firewall is bugging you.
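
You can verify from the haproxy machine itself, for example (substitute 
the address and port from your 'server' line — the one below is a 
placeholder):

curl -v http://192.168.x.x:80/

If that is refused too, the problem is on the backend side, not in 
haproxy.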


Sander



Re: Question regarding haproxy nagios setup

2015-06-19 Thread Sander Klein

On 2015-06-19 16:08, Mauricio Aguilera wrote:

The problem is caused by the ';' before the 'csv' in the URL.

I have the same problem, and I was able to detect that
Nagios cuts the command off there, so
obviously it executes incorrectly.
I tried passing the values with "" and ' ', but nothing...

Does anyone have any ideas?


I would suggest trying to ask this question again in English, given that 
the majority of the world's population doesn't speak Spanish.





Microsoft Edge 408

2015-09-24 Thread Sander Klein

Hi,

I have some clients that complain about getting 408 errors with 
Microsoft Edge. I haven't been able to catch such a request yet, but I 
am wondering if this is the same as the Google Chrome preconnect 
problem.


Anyone by any chance got the same experience or any ideas on this?

Greets,

Sander



Re: ssl parameters ignored

2015-11-24 Thread Sander Klein

Hi Nenad,

On 2015-11-24 16:15, Nenad Merdanovic wrote:

Can you post a minimal configuration (or full) which reproduces this?


Yes, here it is:

global
log /dev/log    local0
log /dev/log    local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon

# Default SSL material locations
#ca-base /etc/ssl/certs
#crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
	ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-server-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-server-verify none
tune.ssl.default-dh-param 4096

defaults
log global
mode    http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

frontend web
bind x.x.x.x:80
bind x.x.x.x:443 ssl crt /etc/haproxy/SSL/ strict-sni
bind x:x:x::x:80
bind x:x:x::x:443 ssl crt /etc/haproxy/SSL/

mode http
maxconn 4096

option httplog
option splice-auto

capture request header Host len 64
capture request header User-Agent   len 16
capture request header Content-Length   len 10
capture request header Referer  len 256
capture response header Content-Length  len 10

use_backend nginx

backend nginx
fullconn 128
mode http

option abortonclose
option http-keep-alive

server nginx 127.0.0.1:443 ssl cookie nginx send-proxy



ssl parameters ignored

2015-11-23 Thread Sander Klein

Hi All,

I'm running haproxy 1.6.2 and it seems it ignores the values given with 
ssl-default-bind-options and/or ssl-default-server-options.


I have the following in my global conf:

ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-server-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS



When testing this config I get:

[ALERT] 326/202736 (24201) : SSLv3 support requested but unavailable.
Configuration file is valid

After testing with ssllabs I also noticed tlsv10 and tlsv11 were still 
enabled. Downgrading to haproxy 1.5.14 removes the error when testing 
the config and shows the tls protocols as disabled when using ssllabs.


Did something change between 1.5 and 1.6 so my config doesn't work 
anymore?


Greets,

Sander


