Re: haproxy 1.5.4 generating badreq 408's

2014-11-25 Thread Guillaume Castagnino
Hi,

On Tuesday, November 25, 2014 at 13:14:45, Klavs Klavsen wrote:
 our haproxy config:
 defaults
log  global
maxconn  8000
option  redispatch
retries  3
stats  enable
timeout  http-request 10s
timeout  queue 1m
timeout  connect 10s
timeout  client 1m
timeout  server 1m
timeout  check 10s
 
 frontend pbutik
 [...]
timeout client  30
 [...]

Look at this timeout ;)
A timeout value without a unit is read as milliseconds, so this is a 30 ms
client timeout. Quite short, don't you think?
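Something like this is probably what was intended (just a sketch on my side;
the 30s value is only a guess at the intended timeout, the point is the
explicit unit):

frontend pbutik
    # without a unit the value is interpreted as milliseconds
    timeout client 30s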

Regards

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




http-send-name-header buffer corruption (1.4.25)

2014-09-04 Thread Guillaume Castagnino
Hi,

It seems that there is an issue with http-send-name-header. I can
reproduce it with 1.4.25, but not with 1.5.

Sometimes it works as expected, and sometimes the header is corrupted.


The conf:
listen 6c353edc-32b2-11e4-abba-0800272e3d2e 127.0.0.100:10002
   balance roundrobin
   mode http
   timeout connect 3
   timeout client 3
   timeout server 3
   option http-server-close
   option redispatch
   reqidel ^Host:
   http-send-name-header Host
   option httpchk HEAD / HTTP/1.0\r\nHost:\ 127.0.0.1
   stats enable
   stats uri /haproxy?stats
   server 192.168.56.1:81 192.168.56.1:81 cookie SLB1   weight 100   maxconn 1000   check inter 2000 fastinter 1000 downinter 5000 fall 2 rise 3
   server 192.168.56.2:81 192.168.56.1:82 cookie SLB2   weight 100   maxconn 1000   check inter 2000 fastinter 1000 downinter 5000 fall 2 rise 3





When the header is corrupted, the HTTP request looks like this (captured
with tcpdump). Look at the wonderful Host header:
GET / HTTP/1.1
Host: 1GET / HTTP/1.1
User-Agent: curl/7.37.1
Accept: */*
X-Forwarded-For: 192.168.56.1
X-Forwarded-Host: daos-dev
X-Forwarded-Server: daos-dev
Connection: close



The corruption is cyclic: I get 3 requests OK, then 3 corrupted, then 3
OK again, then 3 corrupted, etc…


Thanks for your attention, and for this wonderful product!

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




forward backend response instead of 502

2014-07-02 Thread Guillaume Castagnino
Hi all,

I'm currently facing an issue and I cannot figure out how to work around it.

- The big picture:
I have a backend that receives file uploads. It checks the upload size
and, if the maximum upload size is exceeded, immediately sends a 413
Request Entity Too Large with a Connection: close header and closes the
connection (so as not to wait for the end of the upload; it is useless to
consume bandwidth with discarded data…).
Moreover, this looks valid with regard to RFC 2616 section 8.2.2 (a SHOULD,
not a MUST).

- The problem:
There is obviously an early close on the backend side. haproxy thus seems
to consider this a truncated response and issues a 502 error (as stated
in the doc).
But I would like it to forward the 413 error from the backend… Do you
know a way to do this? Maybe I missed an option in the doc, but currently
I have no idea how to work around this.
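The only partial mitigation I can think of on the haproxy side is to reject
oversized uploads up front, at least for clients that send a Content-Length
header (just an untested sketch: the section name and the 10 MB limit are
made up, it does not cover chunked uploads, and http-request deny returns a
403 rather than the backend's 413):

frontend fe_uploads
    mode http
    # hypothetical limit; adjust to the real maximum upload size
    acl too_big hdr_val(content-length) gt 10485760
    http-request deny if too_big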

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: forward backend response instead of 502

2014-07-02 Thread Guillaume Castagnino
On Wednesday, July 2, 2014 at 10:45:57, Guillaume Castagnino wrote:
 Hi all,
 
 I'm currently facing an issue and I cannot figure out how to work
 around it.
 
 - The big picture:
 I have a backend that receives file uploads. It checks the upload size
 and, if the maximum upload size is exceeded, immediately sends a 413
 Request Entity Too Large with a Connection: close header and closes
 the connection (so as not to wait for the end of the upload; it is
 useless to consume bandwidth with discarded data…).
 Moreover, this looks valid with regard to RFC 2616 section 8.2.2 (a
 SHOULD, not a MUST).
 
 - The problem:
 There is obviously an early close on the backend side. haproxy thus
 seems to consider this a truncated response and issues a 502 error (as
 stated in the doc).
 But I would like it to forward the 413 error from the backend… Do you
 know a way to do this? Maybe I missed an option in the doc, but
 currently I have no idea how to work around this.
 
 Thanks !

I forgot to mention: using haproxy 1.4.25 :)

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: forward backend response instead of 502

2014-07-02 Thread Guillaume Castagnino
On Wednesday, July 2, 2014 at 11:45:25, Lukas Tribus wrote:
 Hi Guillaume,
 
  Hi all,
  
  I'm currently facing an issue and I cannot figure out how to work
  around it.
  
  - The big picture:
  I have a backend that receives file uploads. It checks the upload
  size and, if the maximum upload size is exceeded, immediately sends
  a 413 Request Entity Too Large with a Connection: close header and
  closes the connection (so as not to wait for the end of the upload;
  it is useless to consume bandwidth with discarded data…).
  Moreover, this looks valid with regard to RFC 2616 section 8.2.2 (a
  SHOULD, not a MUST).
  
  - The problem:
  There is obviously an early close on the backend side. haproxy thus
  seems to consider this a truncated response and issues a 502 error
  (as stated in the doc).
 
 Could you issue a show errors [1] on the admin socket [2] and post
 the output?
 

Unfortunately it reports nothing.
# echo show errors | socat - /tmp/haproxy.sock
Total events captured on [02/Jul/2014:11:58:08.943] : 0


I did some more tests, and it's really due to the early socket closing
by the backend server:
- When the backend answers 413 immediately as the size limit is reached
and closes the socket, haproxy gives a 502 to the client.
- When the backend answers 413 immediately as the size limit is reached
but continues to slurp data from haproxy until the end of the upload
request, and closes the socket only once the request is complete, the
413 error is correctly forwarded.




I made a small quick-and-dirty TCP server that mimics this behaviour, to
use as a backend (see attached).
Then I send POSTs like this:
curl -H Expect: -F file=@big-file -v http://haproxy-ip/

The HTTP answer is always issued after reading the first 1024 bytes from
haproxy.
With no option passed to the Perl backend, the request from haproxy is
slurped in full, and haproxy answers with the 413 as expected.
With the --early option, the backend closes the socket right after issuing
the HTTP response, so the request from haproxy is abruptly interrupted and
the client gets a 502.

The real server behaves like my test with the --early option, and I
cannot change that (and it seems to conform to the RFC). But I would like
to get the 413 error page issued by the backend, not the 502 from haproxy,
and I see no option in haproxy to forward the backend's error page instead
of the 502.


Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org


early-answer-poc.pl
Description: Perl program


Re: forward backend response instead of 502

2014-07-02 Thread Guillaume Castagnino
On Wednesday, July 2, 2014 at 18:56:48, Willy Tarreau wrote:
 Hi guys,
 
 On Wed, Jul 02, 2014 at 05:19:20PM +0200, Guillaume Castagnino wrote:
  On Wednesday, July 2, 2014 at 16:53:06, Lukas Tribus wrote:
   Hi Guillaume,
   
I made a small quick and dirty TCP server that mimic this
behaviour
to use as a backend (see attached).
Then I send posts like this:
curl -H Expect: -F file=@big-file -v http://haproxy-ip/
   
   Thanks, but it works for me (tm). In both latest (git) and 1.4.25,
   curl sees the 413 response:
  Not here :(
  With local network, I need to use quite big uploads (I use a 50MB
  file to be sure) because haproxy may have the time to send the
  whole file before the backend closes the socket (and in this case,
  the phenomenon is of course hidden). On my test machine, haproxy
  has the time to send 170kB before being interrupted because of the
  socket closing. As you can see in the pcap file attached.
 
 I know what's happening, welcome to HTTP over TCP, which are not
 compatible :-)
 
 Seriously speaking, what's happening here is that the server sends the
 413 and does not drain the incoming data, so the TCP stack sees a
 close with pending input data and immediately flushes the outgoing
 queue and emits a TCP reset instead of the 413 that you hoped was
 pending. This TCP reset is received by haproxy which never has any
 chance to get the 413.
 
 The only solution here is for the server to continue to read the
 incoming data until the client (here haproxy) at least receives the
 413. In practice, since TCP stacks rarely offer the option to verify
 the outgoing queue (at least in a portable way), servers have to
 drain as much as possible before closing, hoping it will leave enough
 time to the client to get the data. Many products including haproxy
 do that nowadays, at least to have a chance to correctly perform
 redirects on POSTs.
 
 I'd suggest you run an strace on haproxy and you'll see an ECONNRESET
 on the recv() without ever a sign of 413. Tcpdump will happily show
 you the RST from the server. That's why I was saying that HTTP is
 incompatible with TCP by design.
 
 The fix is only on the server side unfortunately here. Note, if you
 put haproxy in front of the server and use a unix socket instead of a
 TCP socket to connect to the server, it should work since there's no
 reset in this case on unix sockets, so haproxy will receive the 413,
 will be able to deliver it to the external client over TCP and drain
 as much as it can of its uploaded data. That *may* work, but it's not
 guaranteed.
 
 Hoping this helps,
 Willy

Thank you very much,

I will play with strace tomorrow around this ECONNRESET. Too bad I cannot
work around it on the haproxy side; it will be complicated to fix the
backend :(
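For reference, the unix-socket chaining Willy describes would look roughly
like this on the haproxy side (only a sketch: it assumes a haproxy version
whose server lines accept unix@ addresses, i.e. 1.5 and later if I'm not
mistaken, plus a backend able to listen on a unix socket; the path is made
up):

backend back-uploads
    mode http
    # over a unix socket there is no TCP RST to destroy the pending 413
    # before haproxy has read it
    server app unix@/var/run/app-upload.sock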

Anyway, thanks a lot !!

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: haproxy dev21 high cpu usage

2013-12-17 Thread Guillaume Castagnino
On Tuesday, December 17, 2013 at 10:32:30, Sander Klein wrote:
 Hi,
 
 I've enabled http-keep-alive in my config and now haproxy continuously
 peaks at 100% CPU usage where without http-keep-alive it only uses
 10-13% CPU.
 
 Is this normal/expected behavior?

Hi,

Indeed, I can confirm this behaviour when enabling server-side 
keepalive.

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: further tweaking SSL score on the SSL LABS test

2013-06-20 Thread Guillaume Castagnino
Hi,

Getting the highest score is not doable in real life.
It would require you to:
- disable everything but TLS 1.2 (and forget more or less all current browsers)
- use a >= 4096-bit key (and say goodbye to your CPU power and bandwidth)
etc...

The score is explained here:
https://www.ssllabs.com/projects/rating-guide/index.html
You cannot have the top score and stay usable in real life at the same
time; you have to make some choices.
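Just to illustrate, the kind of bind line a top score would require looks
roughly like this (a sketch, not a recommendation: the certificate path is
made up, it assumes a 1.5-dev build recent enough to have the no-sslv3 /
no-tlsv1x bind options, and it will lock out most current browsers):

bind :443 ssl crt /etc/haproxy/example-4096bit.pem no-sslv3 no-tlsv10 no-tlsv11 ciphers ECDHE-RSA-AES256-GCM-SHA384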

Regards

On Thursday, June 20, 2013 at 18:20:02, shouldbe q931 wrote:
 Hi All,
 
 I had an itch, the itch was that I could get a better score on the
 SSL LABS test with IIS 7.5 than I could with HAProxy terminating SSL
 
 With
 ciphers RC4:HIGH:!aNULL:!MD5
 I would get
 Certificate 100
 Protocol Support 90
 Key Exchange 80
 Cipher Strength 90
 
 With IIS I could get
 Certificate 100
 Protocol Support 90
 Key Exchange 90
 Cipher Strength 90
 
 After much use of Google I have now changed to
 ciphers RC4-SHA:AES128-SHA:AES256-SHA
 and get
 Certificate 100
 Protocol Support 90
 Key Exchange 90
 Cipher Strength 90
 
 However I wonder if anyone else can either improve on the score, or
 keep the same score while improving the number of Cipher Suites.
 
 Cheers
 
 Arne
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: failing to redirect http to https using HAProxy 1.5dev15

2013-02-07 Thread Guillaume Castagnino
Hi,

You should consider using the brand-new redirect that is meant just for that:

redirect scheme https code 301 if ! secure
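In context, adapted from your configuration below (untested on my side, so
take it as a sketch), that would give something like:

frontend fe_default
    bind :443 ssl crt /opt/haproxy/ppc.pem crt /opt/haproxy/keystore/
    bind :80
    acl secure dst_port 443
    # no reqirep needed: redirect scheme rebuilds the Location header
    # from the Host header and the original URI
    redirect scheme https code 301 if !secure
    default_backend be_default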



Regards


On Thursday, February 7, 2013 at 11:38:34, Robbert van Waveren wrote:
 Hi,
 
 I'm trying out HAProxy and would like to use as our general purpose
 proxy/loadbalancer.
 Currently I've all requirements tackled except forcing the use of
 https (by means of a redirection).
 We're planning host many different sub domains so I really need to
 redirect to be as generic as possible.
 
 On the web I found a solution proposal using reqirep to rewrite the
 host header to https and then a generic redirect.
 However I can't get it to work as the redirect seems to fail to add
 the protocol+host part in the redirect url.
 (Leading to a redirect loop)
 
 Below is a simplified configuration that I'm currently trying to get
 working.
 Note that I´m using HAProxy itself to deal with SSL termination.
 
 global
 maxconn 4096
 daemon
 nbproc  2
 defaults
 clitimeout  6
 srvtimeout  3
 contimeout  4000
 modehttp
 
 frontend fe_default
   bind :443 ssl crt /opt/haproxy/ppc.pem crt /opt/haproxy/keystore/
   bind :80
   acl secure dst_port 443
   reqirep ^Host:[\ ]*\(.*\)  Host:\ https://\1 if ! secure
   redirect prefix / if ! secure
   default_backend be_default
 
 backend be_default
   balance roundrobin
   option httpchk
   cookie srv insert postonly indirect
   server civ1 10.2.32.175:443 weight 1 maxconn 512 check cookie one
   server civ2 10.2.32.176:443 weight 1 maxconn 512 check cookie two
 
 
 Any help is much appreciated.
 
 Regards,
 
 Robbert
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: IPv6 bind

2012-11-24 Thread Guillaume Castagnino
Hi,

Thanks a lot, this is working perfectly fine :)
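For anyone finding this thread later, here is roughly what the resulting
configuration looks like with the new bind options (a sketch only; v4v6 and
v6only are the keywords that came out of this discussion, the port is an
example):

frontend front-example
    bind :::80 v6only     # wildcard IPv6 only, regardless of bindv6only
    bind 0.0.0.0:80       # plain IPv4 wildcard alongside it
    # or, the other way around, force a single dual-stack socket:
    # bind :::80 v4v6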

On Saturday, November 24, 2012 at 12:30:38, Willy Tarreau wrote:
 Hi Vincent,
 
 I'm cc-ing Marcus Rueckert who first asked me for the feature.
 
 On Sat, Nov 24, 2012 at 12:07:23PM +0100, Vincent Bernat wrote:
  Hi Willy!
  
  Since it was an easy one, I have sent you a proposal.
 
 Grrr... I just did it too a few minutes ago, sorry for that :-/
 
  The difficulty is to agree on the default behavior. In my patch, I
  propose an option which enables v6 only when present and v4 and v6 when
  absent. Other possibilities are :
   - v6only and v4v6 options which override system defaults, and we keep
     system defaults if we don't have any keyword. A configuration working
     on distribution X won't work on distribution Y.
 
 That's what I've done too. Remember that we don't want to break
 existing setups, so it is out of question to suddenly change the way
 configs have been working for years.
 
   - v4v6 option and when absent, bind on IPv6 only.
  
  I like the later option better but this is the opposite of what we
  have now. I feel this is risky to let users upgrade and have a V6
  only server while they expected to have a V4+V6 server. By doing
  v4+v6 by default, we break setups relying on system-wide default of
  v6only but this will be a visible change (HAproxy won't be able to
  bind the socket).
 I really want to let the system-wide configuration decide when no
 option is set, that's the philosophy we've always followed. We add
 options to force a desired behaviour and without any option, the
 system sets defaults.
  However, I will be happy to update the patch to have v4v6 keyword
  instead of v6only.
 
 I did not know it was possible to revert the system behaviour, so yes
 please feel free to send such a patch to let the user force
 IPV6_V6ONLY to zero ! v4v6 seems appropriate to me too.
 
 Thanks,
 Willy
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




frontend configuration

2012-11-23 Thread Guillaume Castagnino
Hi,

I certainly missed something, but... On http://demo.1wt.eu/, you have a
split between ipv4/ipv6/local in the frontend stats.
This is nice for gathering some traffic stats instead of parsing traffic
logs, but I cannot get the same thing in my configuration. I thought
there was one line per bind, but even when I use several binds, splitting
'::' into explicit v4 and v6 binds, I do not get this.
And I found nothing in the doc about this, but I'm probably searching
with the wrong keywords.

So how do you configure haproxy to get those lines in the frontend stats?


Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: frontend configuration

2012-11-23 Thread Guillaume Castagnino
On Friday, November 23, 2012 at 14:13:40, Baptiste wrote:
 Hi Guillaume,
 
 In your ft configuration, just add the directive option
 socket-stats.

Great, this is the option I missed, thanks !
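For the archives, a minimal sketch of what that gives (names and port are
examples): one bind line per address family plus option socket-stats, and
the stats page then shows one line per listening socket, as on demo.1wt.eu:

frontend front-example
    bind 0.0.0.0:80 name v4
    bind :::80 name v6
    option socket-stats    # one stats line per bind/listener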

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




IPv6 bind

2012-11-23 Thread Guillaume Castagnino
Hi,

I have one more Friday dumb question :)
Is there a way (other than sysctl -w net.ipv6.bindv6only=1) to make
the :: bind listen on IPv6 only and not map IPv4 addresses ?
Something like ipv6only=on in nginx ?

The goal would be to have separate sockets for '*' (v4) and '::' (v6),
keeping the wildcards, and to stop getting v4-mapped addresses instead of
plain IPv4 in the HTTP logs.

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




ACL issue with current HEAD ?

2012-11-07 Thread Guillaume Castagnino
Hi,

I just updated my haproxy to the current HEAD
(08289f12f9a13ea06cf4a16a1211e82e003af218).
I now have ACL issues: the hdr_dom matching seems to be ignored. This
was working perfectly fine with the previous build I used
(1bc4aab2902d732530ccbd098d30e519aab3abdd).

The configuration is quite simple and basic here, see attached.
I should see the stats page at https://haproxy.xwing.info/, but... not
anymore with this new build; the request is passed to the backend instead.

Did I miss something ?
Note: I have not yet started to bisect. I will do it later if it helps.

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org
global
log 127.0.0.1   local0
maxconn 2000
user    haproxy
group   haproxy
daemon
stats   socket  /var/run/haproxy.sock level admin mode 600
stats   timeout 1d
#debug
#quiet

defaults
log global
option  dontlognull
retries 3
option  redispatch
option  splice-auto
maxconn 2000
timeout connect 3s
timeout client 5s
timeout server 60s
timeout queue 30s
timeout tarpit 30s
timeout http-request 3s


# Backends #


# all the vhosts are here
backend back-http
balance roundrobin
mode    http
option  http-server-close
option  abortonclose
option  forwardfor header X-Client
option  httpchk HEAD /server-status HTTP/1.0
cookie  SERVERID insert nocache indirect
server  coruscant 127.0.0.1:8080 maxconn 100 cookie pool1 check inter 5000 rise 2 fall 2

# dev debian virtual machine
backend back-dev-debian
balance roundrobin
mode    http
option  http-server-close
option  abortonclose
option  forwardfor header X-Client
option  httpchk HEAD / HTTP/1.0
cookie  SERVERID insert nocache indirect
server  dev-debian dev.castagnino.org:80 maxconn 50 cookie pool1 check inter 5000 rise 2 fall 2

backend back-stats
mode    http
stats   uri /
stats   auth :

#
# Frontends #
#

# the plain http frontend. Do content switching between dev backend and 
redirector backend
frontend front-webapp
bind    :::80
mode    http
option  httplog
acl dev-debian-vhost hdr_dom(Host) -i dev.castagnino.org www.pirouette-et-compagnie.com fif-dev prestashop
# ssl upgrade
redirect    scheme https code 301 unless dev-debian-vhost
# switch backend
use_backend back-dev-debian if dev-debian-vhost

# the https frontend
frontend front-webapp-ssl
bind    :::443 ssl crt /etc/ssl/startssl/haproxy/xwing.info.pem crt /etc/ssl/startssl/haproxy/ ecdhe prime256v1 ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH
mode    http
option  httplog
acl front-webapp-dead nbsrv(back-http) eq 0
acl stats-vhost hdr_dom(Host) -i haproxy.xwing.info
monitor-uri /status
monitor fail if front-webapp-dead
# prevent clickjacking
rspadd  X-Frame-Options:\ SAMEORIGIN
# full https = do STS
rspadd  Strict-Transport-Security:\ max-age=31536000
# switch backend
use_backend back-stats if stats-vhost
default_backend back-http

# vim: ft=haproxy


Re: ACL issue with current HEAD ?

2012-11-07 Thread Guillaume Castagnino
Argh, sorry for the noise.

I was bisecting and... I cannot reproduce this issue anymore!
I do not understand why...

I have nothing special in the logs. I only saw the request passed to the
backend, no error shown.
Anyway, as the issue has vanished for now, this can be closed! I will
now try to update another server and play with the brand-new
compression :)

On Wednesday, November 7, 2012 at 14:22:36, Baptiste wrote:
 by the way, do you have a few log line showing the issue to share?
 
 cheers
 
 On Wed, Nov 7, 2012 at 2:20 PM, Baptiste bed...@gmail.com wrote:
  Hi,
  
  Could you add a option  http-server-close in your frontend???
  
  cheers
  
  On Wed, Nov 7, 2012 at 1:48 PM, Guillaume Castagnino 
ca...@xwing.info wrote:
  Hi,
  
  I just updated my haproxy to the current HEAD
  (08289f12f9a13ea06cf4a16a1211e82e003af218).
  I now have acl issues: the hdr_dom matching seems to be ignored.
  This
  was working perfectly fine with the previous build I used
  (1bc4aab2902d732530ccbd098d30e519aab3abdd)
  
  The configuration is quite simple and basic here. See attached.
  I should see the stats page from https://haproxy.xwing.info/,
  but... not anymore with this new build, and the request is passed
  to the backend.
  
  Did I miss something ?
  Note: I have not yet started to bisect. I will do it later if it
  helps.
  
  Thanks !
  
  --
  Guillaume Castagnino
  
  ca...@xwing.info / guilla...@castagnino.org
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: Protocol plugin

2012-10-08 Thread Guillaume Castagnino
Hi,

On Monday, October 8, 2012 at 23:06:58, kgardenia42 wrote:
 The problem is that in more than 50% of cases smp_fetch_payload() gets
 called when no data has been read from the client.  I realize that
 normally a load balancer would be in the same data-centre as the pool
 members but it still seems like a bug.  Do you agree?  Actually when
 I think about it - I'm not sure how relevant this even is because the
 client *is* local to the load balancer and the problem occurs before
 the upstream connection is even made so actually I think the upstream
 latency may not be a factor at all.

Have you played with the tcp-request inspect-delay option ?
Unless I'm mistaken, I think it can help you when doing TCP content
inspection.

Regards,

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: Protocol plugin

2012-10-06 Thread Guillaume Castagnino
On Saturday, October 6, 2012 at 17:11:31, kgardenia42 wrote:
 Hi,

Hi,

 I have a custom TCP protocol I would like to load balance with
 haproxy.
 
 I'd like to implement a very simple stickiness algorithm based on the
 first (say) 10 bytes of client data (which contains a client
 identifier).  Source ip stickiness is not reliable enough.
 
 Is this a common use-case?  Has anyone else implemented this? In
 proto_tcp.c I can see what appears to be a protocol analyzer concept
 which looks in the ballpark of what I need.  Does this seem the right
 way to go?  Can anyone give me any pointers on how to get started?
 
 Thanks.

I think you can take inspiration from the SSL ID stickiness approach
explained here, to stick on data contained in the packets:
http://blog.exceliance.fr/2011/07/04/maintain-affinity-based-on-ssl-session-id/

Instead of extracting the SSL ID, you extract your client identifier,
but it is more or less the same thing !
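A rough sketch of what that could look like for a 10-byte identifier at the
start of the stream (completely untested; names, ports and table sizes are
made up, and the tcp-request inspect-delay trick mentioned in the other
reply applies here too):

listen custom-proto
    bind :9000
    mode tcp
    balance roundrobin
    stick-table type binary len 10 size 100k expire 30m
    # wait until at least 10 bytes of client data have arrived
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_len ge 10 }
    # stick on the first 10 bytes of the client payload
    stick on payload(0,10)
    server app1 192.168.0.10:9000 check
    server app2 192.168.0.11:9000 check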

regards,

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




[PATCH] Small doc fix

2012-09-13 Thread Guillaume Castagnino
I noticed that the ssl_sni section is duplicated in configuration.txt.
Here is the (very) small fix.

Thanks !

Guillaume Castagnino (1):
  DOC: duplicate ssl_sni section

 doc/configuration.txt | 9 -
 1 file changed, 9 deletions(-)

-- 
1.7.12




[PATCH] DOC: duplicate ssl_sni section

2012-09-13 Thread Guillaume Castagnino
---
 doc/configuration.txt | 9 -
 1 file changed, 9 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7be3335..227b50f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -8085,15 +8085,6 @@ req_ssl_ver decimal
   SSL data layer, so this will not work with bind lines having the ssl
   option.
 
-ssl_sni string
-  Returns true when the incoming connection was made over an SSL/TLS data layer
-  which deciphered it and found a Server Name Indication TLS extension sent by
-  the client, matching the specified string. In HTTPS, the SNI field (when
-  present) is equal to the requested host name. This match is different from
-  req_ssl_sni above in that it applies to the connection being deciphered by
-  haproxy and not to SSL contents being blindly forwarded. This requires that
-  the SSL library is build with support for TLS extensions (check haproxy -vv).
-
 ssl_has_sni
   This is used to check for presence of a Server Name Indication TLS extension
   in an incoming connection was made over an SSL/TLS data layer. Returns true
-- 
1.7.12




Re: HTTP redirect using domain extract from original request

2012-09-12 Thread Guillaume Castagnino
On Wednesday, September 12, 2012 at 08:47:10, Willy Tarreau wrote:
 On Wed, Sep 12, 2012 at 08:25:01AM +0200, Willy Tarreau wrote:
  I think it's time to add redirect scheme which would recompose the
  Location header from this scheme, the Host header and the URI. I'm
  going to look into this.
 
 OK, finally here it is. Tested and works OK. Use it this way :
 
  redirect scheme https if !{ is_ssl }

Hi,

Wow that's wonderfull !!
I will test this asap.

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: [ANNOUNCE] haproxy 1.5-dev12

2012-09-10 Thread Guillaume Castagnino
Nice !

Just set it up on my personal server with 2 wildcard certificates. It
seems to work like a charm :)

I use this, with TLSv1.2 enabled (so using openssl 1.0.1):
bind :::443 ssl crt /etc/ssl/startssl/haproxy/xwing.info.pem crt /etc/ssl/startssl/haproxy/ ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH prefer-server-ciphers


Thanks, great job !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: [ANNOUNCE] haproxy 1.5-dev12

2012-09-10 Thread Guillaume Castagnino
On Monday, September 10, 2012 at 15:52:23, Willy Tarreau wrote:
 Hi Guillaume,
 
 On Mon, Sep 10, 2012 at 03:46:26PM +0200, Guillaume Castagnino wrote:
  Nice !
  
  Just set up on my personnal server with 2 wildcard certificates. It
  seems to work like a charm :)
  
  I use this, TLSv1.2 enabled (so using openssl 1.0.1):
  bind :::443 ssl crt /etc/ssl/startssl/haproxy/xwing.info.pem crt
  
  /etc/ssl/startssl/haproxy/ ciphers
  ECDHE-RSA-AES128-SHA256:AES128-GCM-
  SHA256:RC4:HIGH:!MD5:!aNULL:!EDH prefer-server-ciphers
 
 Nice, thank you for the feedback !

Just one precision about the cert.pem content, to achieve the best
compatibility: haproxy is fine with being fed the full certificate chain
in the .pem file instead of only the certificate/private key pair (as
suggested in the first SSL announcement from last week). This makes
clients that do certificate chain verification happy:

So cert.pem contains:
- Server certificate
- Intermediate CA 1 certificate
- Intermediate CA 2 certificate
...
- Intermediate CA n certificate
- Root CA certificate
- Private key

Of course, the number of intermediate CAs may vary depending on the
certificate chain of the SSL provider (usually there is just one
intermediate CA).


And this just works flawlessly, making the SSL purists happy ;).

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




HTTP redirect using domain extract from original request

2012-09-10 Thread Guillaume Castagnino
Hi list,

Currently, I'm using an nginx configuration to do the protocol upgrade
from plain HTTP to HTTPS: basically, on port 80 I have a redirect that
catches all virtual hosts and just upgrades the protocol, keeping the
host and request untouched. It looks like this:
rewrite ^ https://$host$request_uri? permanent;

Is there a way to do this with haproxy only, without involving any HTTP
server configuration in the backend ?
In other words, is there a way to do something like this (with 1.5-dev12)
without hardcoding any domain, so that a single rule may match any
virtual host, extracting the domain from the original request:
redirect prefix https://$hdr_dom code 301


From the doc, I see nothing, but I may be missing the right trick :)

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: HTTP redirect using domain extract from original request

2012-09-10 Thread Guillaume Castagnino
On Monday, September 10, 2012 at 21:19:40, Baptiste wrote:
 Hi Guillaume,
 
 You're right, this is not doable with HAProxy, unfortunately.
 The only way you could do that is through a redirect with a hardcoded
 hostname + acl, as you mentioned in your mail.

Thanks Baptiste,

So that means one acl + one redirect rule per vhost, as I feared. I think
I will keep my nginx redirect for now, since I want to upgrade *all*
virtual hosts, preferably without having to list all of them :)
Ideally, I would like to keep haproxy vhost-agnostic.

Thanks !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org



Re: HAProxy with native SSL support !

2012-09-04 Thread Guillaume Castagnino
, for those who want to test or even have the hardware
 to run more interesting benchmarks, the code was merged into the
 master branch and is in today's snapshot (20120904) here :
 
 http://haproxy.1wt.eu/download/1.5/src/snapshot/
 
 Build it by passing USE_OPENSSL=1 on the make command line. You
 should also include support for linux-2.6 options for better results
 :
 
make TARGET=linux2628 USE_OPENSSL=1
 
 If all goes well by the end of the week, I'll issue -dev12, but I
 expect that we'll have some bugs to fix till then.
 
 BTW, be very careful, openssl is a memory monster. We counted about
 80kB per connection for haproxy+ssl, this is 800 MB for only 10k
 connections! And remember, this is still beta-quality code. Don't
 blindly put this in production (eventhough I did it on 1wt.eu :
 https://demo.1wt.eu/). You have been warned!
 
 Please use the links below :
 site index  : http://haproxy.1wt.eu/
 sources : http://haproxy.1wt.eu/download/1.5/src/snapshot/
 changelog   :
 http://haproxy.1wt.eu/download/1.5/src/snapshot/CHANGELOG Exceliance 
 : http://www.exceliance.fr/en/
 
 Have a lot of fun and please report your success/failures,
 Willy
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: haproxy and interaction with VRRP

2011-12-12 Thread Guillaume Castagnino
On Monday, December 12, 2011 at 10:18:33, Vincent Bernat wrote:
 Hi!
 
 When haproxy is bound to an IP address managed by VRRP, this IP address
 may be absent when haproxy starts. What is the best way to handle this?
 
   1. Start haproxy only when the host is master.
   2. Use transparent mode.
   3. Patch haproxy to use IP_FREEBIND option.

What about a 4th option:
- add net.ipv4.ip_nonlocal_bind=1 to your sysctl.conf settings. No need to
patch anything.

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: RE: x-forwarded-for and server side keep alive

2011-04-12 Thread Guillaume Castagnino
option http-server-close is sufficient and allows client-side keep-alive.
Moreover, to achieve good load balancing, server-side keep-alive NEEDS to
be disabled (with the http-server-close option), since multiple requests
inside one keep-alive session are not balanced...

Client-side keep-alive does not matter here.
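In configuration terms this is simply (a minimal sketch, the header name is
the one from the quoted example):

defaults
    mode http
    # keep-alive towards the client, one server connection per request,
    # so the forwarded header is added to every request
    option http-server-close
    option forwardfor header X-Client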

On Tuesday 12 April 2011 13:53:49 Brian Carpio wrote:
 From the documentation
 
   It is important to note that as long as HAProxy does not support keep-alive
   connections, only the first request of a connection will receive the header.
   For this reason, it is important to ensure that option httpclose is set
   when using this option.
 
   Examples :
 # Public HTTP address also used by stunnel on the same machine
 frontend www
 mode http
 option forwardfor except 127.0.0.1  # stunnel already adds the header
 
 # Those servers want the IP Address in X-Client
 backend www
 mode http
 option forwardfor header X-Client
 
   See also : option httpclose
 
 
 Brian Carpio 
 Senior Systems Engineer
 
 Office: +1.303.962.7242
 Mobile: +1.720.319.8617
 Email: bcar...@broadhop.com
 
 
 -Original Message-
 From: Julien Vehent [mailto:jul...@linuxwall.info] 
 Sent: Tuesday, April 12, 2011 1:55 PM
 To: Haproxy
 Subject: x-forwarded-for and server side keep alive
 
  Hi there,
 
 I browsed the list to look for an answer to this question, without
 success, so I hope you can help me on this.
 
 I want to use Haproxy in front of Tomcat. I need to get the client's IP,
 so I logically activated 'option forwardfor', which works fine.
 
 I also want server-side keepalive. And this is when I discovered that
 Haproxy sends the x-forwarded-for header with the first request of the
 keep-alived connection only.
 It seems that tomcat 6.0.32 (that we use) cannot remember the
 x-forwarded-for value across multiple requests. So we would need to send
 the header with every request.
 
 My first question is: does anybody see anything wrong with those
 assumptions ?
 
 Then: is there a way to have x-forwarded-for added to each request
 without giving up on server-side keep alive ?
 
 
  Thanks,
  Julien
 
 
 
-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org




Re: hanging in syn_sent

2010-09-07 Thread Guillaume Castagnino
On Tuesday, September 7, 2010 at 21:52:24, Joe Williams wrote:
 Anyone ever seen connections to haproxy hang in a syn_sent state and then
 fail while other connections (to/from the same hosts) work perfectly fine?

Yes, I already experienced this while using net.ipv4.tcp_tw_recycle=1.
It causes the kernel to silently drop/ignore some connections without any
RST (usually when the client is behind a NAT).

If your haproxy host uses this parameter, try disabling it !

-- 
Guillaume Castagnino
ca...@xwing.info / guilla...@castagnino.org



Re: how to associate front and back ends?

2010-06-02 Thread Guillaume Castagnino
On Wednesday, June 2, 2010 at 21:29:56, M B wrote:
 How do I associate a front end with a back end?  use_backend appears to
 want a conditional.  I just want to say, for this frontend, always use
 this backend.

Simply use default_backend then. It will match all connections that are
not caught by a previous use_backend rule defined in the current frontend
section.
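A minimal sketch (section names and the host ACL are examples only):

frontend front-example
    bind :80
    mode http
    acl is_special hdr_dom(host) -i special.example.com
    # conditional routing first...
    use_backend be_special if is_special
    # ...everything else falls through to the default backend
    default_backend be_main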

-- 
Guillaume Castagnino
g.castagn...@pepperway.fr
Tel : +33148242089



Potential problem/incompatibility between haproxy smtpchk and grsecurity blackhole feature ?

2010-03-12 Thread Guillaume Castagnino
Hi,

Here is my setup :
- 2 debian lenny nodes, with haproxy 1.3.22 (lenny backport package)
- kernel 2.6.33 with last grsecurity patch 
(grsecurity-2.1.14-2.6.33-201003071645.patch)
- postfix 2.5.5

haproxy runs on one of the two nodes (which one is controlled by
heartbeat), and uses the 2 nodes as backends for HTTP, HTTPS and SMTP.

No problem with HTTP and HTTPS backends.


But I have a problem with the SMTP backends when enabling the grsecurity
BLACKHOLE feature. I spent some time with Brad Spengler (the grsec dev)
trying to fix this within grsec. We tried some patches, but found nothing.
There seems to be a missing RST packet when closing the connection, and so
far he has found no way to fix it without disabling the BLACKHOLE feature.
His last thought was that it could be a problem/bug within haproxy.



Symptoms :
- each SMTP probe (smtpchk) results in a socket stuck in the LAST_ACK
state on the remote backend (the local backend is not affected since
BLACKHOLE does not affect local sockets).
- lots of TCP retransmissions from the SMTP backend.
- lots of SMTP probes fail, probably due to the large number of sockets
remaining in the LAST_ACK state. From my stats, around 6% of the probes
fail.


Attached are the haproxy configuration and 2 small tcpdump captures: one
with BLACKHOLE enabled (haproxy_smtp_probe_ko.pcap) and the other with
BLACKHOLE disabled (haproxy_smtp_probe_ok.pcap).



So, do you have an idea about this problem ? A bug ? Not a bug ? An
incompatibility between the two ?


Thanks for your feedback,
Guillaume


-- 
Guillaume Castagnino
g.castagn...@pepperway.fr
Tel : +33148242089
global
log 127.0.0.1   local0
log 127.0.0.1   local1 notice
user    haproxy
group   haproxy
daemon

defaults
log global
option  httplog
option  dontlognull
retries 3
option  redispatch
stats   enable
stats   auth :
maxconn 2000
timeout connect 4s
timeout client 5s
timeout server 30s
timeout http-request 5s


backend back-http
balance roundrobin
mode    http
option  httpclose
option  forwardfor header X-Client
option  httpchk HEAD /.check HTTP/1.0
cookie  SERVERID insert nocache indirect
server  pepperway-prod1 pepperway-prod1:80 cookie pool1 check inter 2000 rise 2 fall 5 maxconn 200
server  pepperway-prod2 pepperway-prod2:80 cookie pool2 check inter 2000 rise 2 fall 5 maxconn 200

backend back-https
balance source
mode    tcp
option  ssl-hello-chk
server  pepperway-prod1 pepperway-prod1:443 check inter 2000 rise 2 fall 5 maxconn 100
server  pepperway-prod2 pepperway-prod2:443 check inter 2000 rise 2 fall 5 maxconn 100

backend back-smtp
balance roundrobin
mode    tcp
option  smtpchk EHLO pepperway.fr
server  pepperway-prod1 pepperway-prod1:25 check inter 2000 rise 2 fall 5 maxconn 100
server  pepperway-prod2 pepperway-prod2:25 check inter 2000 rise 2 fall 5 maxconn 100


frontend front-webapp 87.98.142.217:80
mode    http
default_backend back-http

frontend front-webapp2 91.121.61.220:80
mode    http
default_backend back-http

frontend front-webapp-ssl 87.98.142.217:443
mode    tcp
default_backend back-https

frontend front-smtp 87.98.142.217:25
mode    tcp
default_backend back-smtp



haproxy_smtp_probe_ko.pcap
Description: Binary data


haproxy_smtp_probe_ok.pcap
Description: Binary data


Re: Potential problem/incompatibility between haproxy smtpchk and grsecurity blackhole feature ?

2010-03-12 Thread Guillaume Castagnino
Hi,

On Friday, March 12, 2010 at 21:55:07, Willy Tarreau wrote:
 BTW, when the RST is sent, it is because haproxy has already closed the
 socket and does not own it anymore.

If I understand correctly what Brad told me, the key to the problem could
be around this point.

 Your captures were very useful. I'll contact Brad about that. Maybe if
 he explains me how his patch works, it will help him find how to fix it.
 Do you mind if I CC you ?

No problem with CCing me (but Brad knows me by my personal address, not my
professional one, if you want to CC it: ca...@xwing.info :))

I can of course provide more information if you need it.


Thanks,
Guillaume

-- 
Guillaume Castagnino
g.castagn...@pepperway.fr
Tel : +33148242089



Re: Potential problem/incompatibility between haproxy smtpchk and grsecurity blackhole feature ?

2010-03-12 Thread Guillaume Castagnino
On Friday, March 12, 2010 at 22:21:49, Willy Tarreau wrote:
 On Fri, Mar 12, 2010 at 09:55:07PM +0100, Willy Tarreau wrote:
  I've just looked at your traces. It's strange that it's related to the
  blackhole feature because the doc says it just disables sending of
  port unreachables (and possibly RSTs). From your traces, an RST is
  properly sent in response to the 250, but the server happily
  ignores despite the fact that its sequence number is OK, and it
  keeps resending the same data over and over. And as your trace
  shows that you sniffed on the server, there's no risk that the
  RST was dropped on the network.
 
 After a bit of thinking, while it is wrong from the server to have
 ignored the RST in the first place, it's wrong for the client not
 to resend it on subsequent packets, and this is what is caused by
 the BLACKHOLE patch. I've checked the patch, and I see what is
 wrong in it : it prevents sending of RST packets in any case,
 while it should only be prevented in response to a SYN. I have one
 similar patch in my own 2.4 tree which does not exhibit the issue,
 so I'll contact Brad with that.


Here is the latest patch Brad provided me against the latest grsec (if you
want to check it): http://www.grsecurity.net/~spender/blackhole3.diff

But despite this, I still get the same problem.


Guillaume

-- 
Guillaume Castagnino
g.castagn...@pepperway.fr
Tel : +33148242089