Question about HAProxy

2015-04-14 Thread christophe cretiaux
Hello,

I am currently researching your proxy solution, and I would like to know:
- whether your solution offers a graphical interface for configuring the
proxy
- whether it is compatible with Hyper-V virtual machines
- whether a URL blacklist can be added manually and/or
automatically
- and finally, whether your solution is compatible with Active Directory

My apologies if this information is already available on your website; I
did not find it.


Best regards.

-- 
CRETIAUX Christophe - second-year student in networks and
telecommunications.


Re: possible header capture corruption when timeout queue

2015-04-14 Thread David Birdsong
On Sat, Apr 11, 2015 at 2:16 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi David,

 On Thu, Apr 09, 2015 at 04:01:44PM -0700, David Birdsong wrote:
  Ok, false alarm.
 
  We have corruption in our log parsing stream so that's what the rest of
  my week will be centered around.

 OK, cool (for the rest of us). I was suspecting that the captures were
 initialized too late and were showing data from previous sessions (which
 could have been possible).

 However there's something that worries me in what you're writing :

    For the time being, we're stuck at 1.5.5 since 1.5.6 changes hashes
    for path-based hashing. We're working on re-arranging our infra to not
    take such a hit were our working-set request hashing to be redrawn,
    which will happen with >= 1.5.6.

 What's the problem exactly ? Is it related to this patch ?


Yes, this will redraw our entire cache.



   commit ac0329261bc9f4001e9d313ee879d193c05ca56e
   Author: Willy Tarreau w...@1wt.eu
   Date:   Fri Oct 17 12:11:50 2014 +0200

 BUG/MEDIUM: backend: fix URI hash when a query string is present

 Commit 98634f0 (MEDIUM: backend: Enhance hash-type directive with an
 algorithm options) cleaned up the hashing code by using a centralized
 function. A bug appeared in get_server_uh() which is the URI hashing
 function. Prior to the patch, the function would stop hashing on the
 question mark, or on the trailing slash of a maximum directory count.
 Consecutive to the patch, this last character is included into the
 hash computation. This means that :

 GET /0
 GET /0?

 Are not hashed similarly. The following configuration reproduces it :

 mode http
 balance uri
 server s1 0.0.0.0:1234 redir /s1
 server s2 0.0.0.0:1234 redir /s2

 Many thanks to Vedran Furac for reporting this issue. The fix must
 be backported to 1.5.
 (cherry picked from commit fad4ffc89337277f3d5ed32b66986730e891558a)
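The effect can be illustrated outside haproxy (a sketch only, not haproxy's actual hash function): with the fix, the hash input stops at the question mark, so a bare trailing "?" no longer changes the key.

```shell
# Illustrative only -- not haproxy's real hash. The point is the key
# extraction: with the fix, hashing stops at the '?', so "/0" and "/0?"
# yield the same key and land on the same server again.
uri_hash_key() {
    printf '%s\n' "${1%%\?*}"   # drop the query string, '?' included
}

uri_hash_key "/0"     # prints: /0
uri_hash_key "/0?"    # prints: /0  (same key -> same server)
```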

 If so, is there anything that could be done in haproxy to work around
 your issue? I mean, we're taking great care not to introduce regressions
 in the stable branch, which is why I don't want to see any feature
 backports there anymore (I learned my lesson with 1.4). So if we can do
 anything so that 1.5.12 is a safe upgrade for you, please suggest it.


I don't think we need a work-around. We're only stuck because our system
expects a level of cache locality that the hash provides. A sudden redraw
would be pretty disastrous, but we're working on removing this cache
locality requirement and we'll be able to upgrade to the latest version.



Regards,
 Willy




Re: limiting conn-curs per-ip using x-forwarded-for

2015-04-14 Thread Klavs Klavsen

I sniffed the traffic on haproxy and the requests look fine:

GET /php-sleep.php?43 HTTP/1.1
User-Agent: curl/7.35.0
Host: kms.example.org
Accept: */*
X-Forwarded-For: 123.149.124.91

HTTP/1.1 200 OK
Server: Apache
Content-Type: text/html; charset=UTF-8
Content-Length: 34
Accept-Ranges: bytes
Date: Tue, 14 Apr 2015 07:03:40 GMT
X-Varnish: 2130622187 2130622186
Age: 0
Via: 1.1 varnish
Connection: keep-alive
X-Varnish-Cache: HIT
X-Varnish-Cache-Hits: 1

Finish<br><br>Slept for 43 seconds

but while the requests are running the table is empty:
# table: kms-ds-nocache, type: ip, size:102400, used:0


Klavs Klavsen wrote on 04/14/2015 08:49 AM:

Hi Baptiste,

Thank you very much for your help.

Unfortunately it didn't work.. I tried this:

frontend kms-ds-nocache
   bind x.x.x.x:80
   mode  http
   balance  roundrobin
   default_backend  kms-ds-backend
   option  httplog
   option  accept-invalid-http-request
   stick-table  type ip size 100k expire 30s store conn_cur
   tcp-request content accept  if HTTP
   tcp-request content reject  if { sc1_conn_cur ge 2 }
   tcp-request content track-sc1  hdr(X-Forwarded-For)
   tcp-request inspect-delay  5s

and I was still able to open 5 connections... (I call a PHP script using
curl, which sleeps for 40 seconds. :)

Baptiste wrote on 04/09/2015 11:28 PM:

Hi Klavs,

Please give a try to the configuration below:
frontend nocache
   mode  http
..
   option  httplog
   option  accept-invalid-http-request
   stick-table  type ip size 100k expire 30s store conn_cur
   tcp-request inspect-delay 5s
   tcp-request content accept if HTTP
   tcp-request content track-sc1  hdr(X-Forwarded-For)
   tcp-request content reject  if { sc1_conn_cur ge 10 }

'tcp-request connection' rules are executed as soon as the connection
arrives in HAProxy, so the X-Forwarded-For header may not have been read
yet.
The conf above uses 'tcp-request content' instead, and to be sure we'll
find the header, I've added the inspect delay, which accepts the request
once the buffer is confirmed to contain HTTP.
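A side note on ordering (a sketch based on the directives quoted in this thread, not a tested configuration): 'tcp-request content' rules are evaluated in the order written, so the counter must be tracked before any rule reads sc1_conn_cur, and an unconditional accept placed first will short-circuit the later rules.

```
frontend kms-ds-nocache
    bind x.x.x.x:80
    mode http
    option httplog
    default_backend kms-ds-backend
    stick-table type ip size 100k expire 30s store conn_cur
    tcp-request inspect-delay 5s
    tcp-request content track-sc1 hdr(X-Forwarded-For) if HTTP
    tcp-request content reject if { sc1_conn_cur ge 10 }
    tcp-request content accept if HTTP
```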

Baptiste


On Tue, Apr 7, 2015 at 12:33 PM, Klavs Klavsen k...@vsen.dk wrote:

Back from easter vacation :)

Baptiste wrote on 03/25/2015 10:30 AM:


Hi,

some useful examples can be taken from this blog post:

http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/


Just replace src by hdr(X-Forwarded-For).



Tried:

frontend nocache
   mode  http
..
   option  httplog
   option  accept-invalid-http-request
   stick-table  type ip size 100k expire 30s store conn_cur
   tcp-request connection reject  if { src_conn_cur ge 10 }
   tcp-request connection track-sc1  hdr(X-Forwarded-For)
..

but haproxy complains:
'tcp-request connection track-sc1' : fetch method 'hdr(X-Forwarded-For)'
extracts information from 'HTTP request headers,HTTP response
headers', none
of which is available here

I took the example from
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/


:(


--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
   --Henry Spencer








--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer




Re: limiting conn-curs per-ip using x-forwarded-for

2015-04-14 Thread Klavs Klavsen

Hi Baptiste,

Thank you very much for your help.

Unfortunately it didn't work.. I tried this:

frontend kms-ds-nocache
  bind x.x.x.x:80
  mode  http
  balance  roundrobin
  default_backend  kms-ds-backend
  option  httplog
  option  accept-invalid-http-request
  stick-table  type ip size 100k expire 30s store conn_cur
  tcp-request content accept  if HTTP
  tcp-request content reject  if { sc1_conn_cur ge 2 }
  tcp-request content track-sc1  hdr(X-Forwarded-For)
  tcp-request inspect-delay  5s

and I was still able to open 5 connections... (I call a PHP script using
curl, which sleeps for 40 seconds. :)


Baptiste wrote on 04/09/2015 11:28 PM:

Hi Klavs,

Please give a try to the configuration below:
frontend nocache
   mode  http
..
   option  httplog
   option  accept-invalid-http-request
   stick-table  type ip size 100k expire 30s store conn_cur
   tcp-request inspect-delay 5s
   tcp-request content accept if HTTP
   tcp-request content track-sc1  hdr(X-Forwarded-For)
   tcp-request content reject  if { sc1_conn_cur ge 10 }

'tcp-request connection' rules are executed as soon as the connection
arrives in HAProxy, so the X-Forwarded-For header may not have been read
yet.
The conf above uses 'tcp-request content' instead, and to be sure we'll
find the header, I've added the inspect delay, which accepts the request
once the buffer is confirmed to contain HTTP.

Baptiste


On Tue, Apr 7, 2015 at 12:33 PM, Klavs Klavsen k...@vsen.dk wrote:

Back from easter vacation :)

Baptiste wrote on 03/25/2015 10:30 AM:


Hi,

some useful examples can be taken from this blog post:

http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/

Just replace src by hdr(X-Forwarded-For).



Tried:

frontend nocache
   mode  http
..
   option  httplog
   option  accept-invalid-http-request
   stick-table  type ip size 100k expire 30s store conn_cur
   tcp-request connection reject  if { src_conn_cur ge 10 }
   tcp-request connection track-sc1  hdr(X-Forwarded-For)
..

but haproxy complains:
'tcp-request connection track-sc1' : fetch method 'hdr(X-Forwarded-For)'
extracts information from 'HTTP request headers,HTTP response headers', none
of which is available here

I took the example from
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/

:(


--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
   --Henry Spencer





--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer




Statistics in multi-process mode

2015-04-14 Thread hiepnv

 Hello everyone,

I am Hiep. My company has been using haproxy version 1.5.11 for web 
server load balancing and caching. The server running haproxy has 12 
cores, so to exploit its full power I run haproxy in multi-process mode. 
The problem is that statistics in multi-process mode are reported per 
process, not totalled across all processes.


Therefore, I wrote code to support aggregated statistics in multi-process 
mode and would like to add it to haproxy in a future release. Does it 
conflict with any other work in progress? And how can I contribute my code?
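Until aggregated statistics exist in haproxy itself, per-process figures can be merged externally. A rough sketch (the socket paths and one-admin-socket-per-process layout are assumptions about the local setup): "show stat" emits CSV whose 5th field is scur (current sessions), which can be summed per proxy/server pair:

```shell
# Sum the scur column (5th CSV field of "show stat") across the
# concatenated output of several per-process admin sockets.
sum_scur() {
    awk -F, '!/^#/ && $5 ~ /^[0-9]+$/ { tot[$1 "," $2] += $5 }
             END { for (k in tot) print k "," tot[k] }'
}

# Hypothetical usage, one stats socket per process:
#   for s in /var/run/haproxy-*.sock; do
#       echo "show stat" | socat stdio "$s"
#   done | sum_scur
```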


Thanks and Best Regards,

Hiep Nguyen,
hie...@vccloud.vn



Re: Statistics in multi-process mode

2015-04-14 Thread Björn Zettergren
On Tue, Apr 14, 2015 at 9:35 AM, hiepnv hie...@vccloud.vn wrote:
 Therefore, I wrote code to support statistics in multi-process mode and want
 to add this to haproxy in future releases.
 And how can I contribute my code?

I don't know if there are other works in progress to support what
you're looking to share, but there are general notes in section 5
("How To Contribute") of the README in the haproxy sources:
https://github.com/haproxy/haproxy

The first step would probably be to submit a patch for general
review/comments :-)

Best Regards
Björn Zettergren



Re: HA proxy - Need information

2015-04-14 Thread Baptiste
Hi Thibault,

You can contact haproxy.com, we have a nice GUI and an API on top of
HAProxy in our ALOHA appliance.
And we speak French :)
Just give us a call and ask to speak to Sean (+33 1 30 67 60 74).

Baptiste


On Mon, Apr 13, 2015 at 4:55 PM, Thibault Labrut
thibault.lab...@enioka.com wrote:
 Hello,

 I am currently installing HAProxy with keepalived for one of my clients.

 To facilitate the administration of this tool, I would like to know whether
 you can recommend an administration web GUI for HAProxy.

 Thank you for your help.

 Best regards,
 --
 Thibault Labrut
 enioka
 24 galerie Saint-Marc
 75002 Paris
 +33 615 700 935
 +33 144 618 314



Re: possible header capture corruption when timeout queue

2015-04-14 Thread Willy Tarreau
On Mon, Apr 13, 2015 at 11:31:46PM -0700, David Birdsong wrote:
  What's the problem exactly ? Is it related to this patch ?
 
 Yes, this will redraw our entire cache.

Does it mean you have *that* many URLs ending with /? ? Because
normally it's the only case where it makes a difference. Would you
prefer that we add an option such as hash-empty-query or something
like this ?

 I don't think we need a work-around. We're only stuck because our system
 expects a level of cache locality that the hash provides. A sudden redraw
 would be pretty disastrous, but we're working on removing this cache
 locality requirement and we'll be able to upgrade to the latest version.

But cache locality is something normal and expected (otherwise hashes
would not be used for load balancing). I'm surprised that the specific
case where there's a difference makes so much of a difference, which is
why I'd like to ensure that 1) the case where this happens is really
significant, and 2) we find a solution so that you don't need to change
the way your infrastructure works.

Regards,
Willy




Re: HA proxy - Need information

2015-04-14 Thread Thibault Labrut
Hi,

But I am looking for a GUI to manage HAProxy (to add/remove services, for
example).

Best regards,
-- 
Thibault Labrut
enioka
24 galerie Saint-Marc
75002 Paris
+33 615 700 935
+33 144 618 314

From:  Igor Cicimov ig...@encompasscorporation.com
Date:  Tuesday, 14 April 2015 02:56
To:  Thibault Labrut thibault.lab...@enioka.com
Cc:  haproxy@formilux.org
Subject:  Re: HA proxy - Need information



On Tue, Apr 14, 2015 at 12:55 AM, Thibault Labrut
thibault.lab...@enioka.com wrote:
 Hello,
 
 I am currently installing HAProxy with keepalived for one of my clients.
 
 To facilitate the administration of this tool, I would like to know whether
 you can recommend an administration web GUI for HAProxy.

Look for stats in the HAP documentation.
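For reference, the built-in stats page Igor refers to is enabled with a few lines of configuration (a minimal sketch; the port, URI and credentials below are placeholders):

```
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:changeme
```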
 
 
 Thank you for your help.
 
 Best regards,
 -- 
 Thibault Labrut
 enioka
 24 galerie Saint-Marc
 75002 Paris
 +33 615 700 935
 +33 144 618 314






Re: Achieving Zero Downtime Restarts at Yelp

2015-04-14 Thread Pavlos Parissis


On 13/04/2015 07:24, Joseph Lynch wrote:
 Hello,
 
 I published an article today on Yelp's engineering blog
 (http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html)
 that shows a technique we use for low latency, zero downtime restarts of
 HAProxy. This solves the "when I restart HAProxy some of my clients
 get RSTs" problems that can occur. We built it to solve the RSTs in
 our internal load balancing, so there is a little more work to be done
 to modify the method to work with external traffic, which I talk about
 in the post.
 

thanks for sharing this very detailed article.

You wrote that
'As of version 1.5.11, HAProxy does not support zero downtime restarts
or reloads of configuration. Instead, it supports fast...'

Was zero downtime supported before 1.5.11? I believe not.

Cheers,
Pavlos



Re: Statistics in multi-process mode

2015-04-14 Thread hiepnv

Ok, Thanks for your response :-)

On 14/04/2015 15:19, Björn Zettergren wrote:

On Tue, Apr 14, 2015 at 9:35 AM, hiepnv hie...@vccloud.vn wrote:

Therefore, I wrote code to support statistics in multi-process mode and want
to add this to haproxy in future releases.
And how can I contribute my code?


I don't know if there are other works in progress to support what
you're looking to share, but there are general notes in section 5
("How To Contribute") of the README in the haproxy sources:
https://github.com/haproxy/haproxy

The first step would probably be to submit a patch for general
review/comments :-)

Best Regards
Björn Zettergren





redis redispatch question

2015-04-14 Thread Jim Gronowski
Good day, everyone.

I'm using HAproxy in front of a redis sentinel cluster.  It has worked very 
well, but this morning I ran into a small problem.  The sentinel cluster 
elected a new master, and HAproxy correctly detected the change and updated 
accordingly (new connections went to the correct server).  However, one of our 
client web applications kept the connection open to the old server (now a 
slave), generating errors.  I'm guessing it's due to keepalives.

Will 'option redispatch' correct this?  If not, is there a preferred way to 
close the connection and force the client to reconnect?  It doesn't necessarily 
have to be graceful, although that would be nice.

Pertinent config below.

-Jim


defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    option  clitcpka
    option  srvtcpka
    timeout connect 5000
    timeout client  3m
    timeout server  12
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend redisFE
    bind *:6379
    mode tcp
    maxconn 10240
    default_backend redisBE

backend redisBE
    mode tcp
    option tcplog
    balance source
    option tcp-check
    #tcp-check send AUTH\ foobar\r\n
    #tcp-check expect +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-01 127.0.0.1:6379 maxconn 1024 check inter 1s
    server redis-02 127.0.0.2:6379 maxconn 1024 check inter 1s


Ditronics, LLC email disclaimer:
This communication, including attachments, is intended only for the exclusive 
use of addressee and may contain proprietary, confidential, or privileged 
information. Any use, review, duplication, disclosure, dissemination, or 
distribution is strictly prohibited. If you were not the intended recipient, 
you have received this communication in error. Please notify sender immediately 
by return e-mail, delete this communication, and destroy any copies.


Re: httpchk failures

2015-04-14 Thread Benjamin Smith
Igor, 

Thanks for the response; I didn't see this email until just now as it didn't 
go through the mailing list and so wasn't filtered as expected. 

I spent my morning trying everything I could think of to get haproxy's 
agent-check to work consistently. The main symptom is that haproxy would 
mark hosts with the status of DRAIN and provide no clues as to why, even 
with log-health-checks on. After a *lot* of trial and error, I've found the 
following that seem to be bugs on the latest 1.5.11 release, running on 
CentOS 6. 

1) agent-check output words were sometimes handled inconsistently, ignored, or 
misunderstood if a space was used instead of a comma as the separator. 

This is understood: 
echo ready,78%\r\n

This line often causes a DRAIN state. A restart of haproxy was insufficient to 
clear the DRAIN state: (see #3) 
echo ready 78%\r\n

2) Inconsistent logging of the DRAIN status change when health logging was on 
(the server would turn blue in the stats page without any logging as to why). 
Logging would sometimes say "Server $service/$name is UP (leaving 
forced drain)" even as the stats page continued to report the DRAIN state! 

3) Even when the agent output was amended as above, hosts that had been set to 
the DRAIN state due to issue #1 were not brought back to the ready/up state 
until "enable health $service/$host" and/or "enable agent $service/$host" was 
sent to the stats socket. 

4) Setting the server weight to 10 seems to help a significant amount. If, in 
fact, haproxy can't handle 35% of a weight of 1, it should throw an error on 
startup, IMHO. 
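
For comparison, a minimal agent along the lines described (a sketch, not the poster's xinetd script; the load-to-weight mapping is purely illustrative). It reports a comma-separated state and weight, the form found above to parse reliably:

```shell
#!/bin/sh
# Minimal HAProxy agent-check responder: derive a weight percentage
# from the 1-minute load average relative to the core count, clamp it
# to 1..100, and answer "up,<pct>%" terminated by CRLF.
# Intended to be served by xinetd or socat on the agent port.
cores=$(getconf _NPROCESSORS_ONLN)
load=$(awk '{ print $1 }' /proc/loadavg)
pct=$(awk -v l="$load" -v c="$cores" 'BEGIN {
    p = int(100 - 100 * l / c)
    if (p < 1) p = 1
    if (p > 100) p = 100
    print p
}')
printf 'up,%s%%\r\n' "$pct"
```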

See also my comments interspersed below: 

Thanks, 

Benjamin Smith 


On Tuesday, April 14, 2015 10:50:31 AM you wrote:
 On Tue, Apr 14, 2015 at 10:11 AM, Igor Cicimov 
 
 ig...@encompasscorporation.com wrote:
  On Tue, Apr 14, 2015 at 5:00 AM, Benjamin Smith li...@benjamindsmith.com
  
  wrote:
  We have 5 Apache servers behind haproxy and we're trying to use the
  httpchk option along with some performance monitoring. For some reason,
  haproxy keeps thinking that 3/5 apache servers are down even though it's
  obvious that haproxy is both asking the questions and the servers are
  answering.
  
  Is there a way to log httpchk failures? How can I ask haproxy why it
  seems to
  think that several apache servers are down?
  
  Our config:
  CentOS 6.x recently updated, 64 bit.
  
  Performing an agent-check manually seems to give good results. The below
  result is immediate:
  [root@xr1 ~]# telnet 10.1.1.12 9333
  Trying 10.1.1.12...
  Connected to 10.1.1.12.
  Escape character is '^]'.
  up 78%
  Connection closed by foreign host.
  
  
  I can see that xinetd on the logic server got the response:
  Apr 13 18:45:02 curie xinetd[21890]: EXIT: calcload333 status=0 pid=25693
  duration=0(sec)
  Apr 13 18:45:06 curie xinetd[21890]: START: calcload333 pid=26590
  from=:::10.1.1.1
  
  
  I can see that apache is serving happy replies to the load balancer:
  [root@curie ~]# tail -f /var/log/httpd/access_log | grep -i 10.1.1.1 
  10.1.1.1 - - [13/Apr/2015:18:47:15 +] OPTIONS / HTTP/1.0 302 - -
  -
  10.1.1.1 - - [13/Apr/2015:18:47:17 +] OPTIONS / HTTP/1.0 302 - -
  -
  10.1.1.1 - - [13/Apr/2015:18:47:19 +] OPTIONS / HTTP/1.0 302 - -
  -
  ^C
  
  I have a feeling you might have been a little bit confused here. Per my
  understanding, and your configuration:
  
  server server10 10.1.1.10:20333 maxconn 256 *check agent-check agent-port
  9333 agent-inter 4000*
  
  HAP is doing a health check on the agent you are using and not on the
  Apache, so the apache response in this case looks irrelevant to me. I
  don't know how you set up the agent since you haven't posted that part,
  but this is an excellent article by Malcolm Turnbull, the inventor of
  agent-check, that might help:
  
  http://blog.loadbalancer.org/open-source-windows-service-for-reporting-server-load-back-to-haproxy-load-balancer-feedback-agent/

We used this exact blog entry as our starting point. In our case, the xinetd 
script compares load average, apache process count, cpu info and a little salt 
to come up with a number ranging from 0% to 500%. 


 and press enter twice and check the output. Other option is using curl:
 
 $ curl -s -S -i --http1.0 -X OPTIONS http://10.1.1.12:20333/


[root@xr1 ~]# curl -s -S -i --http1.0 -X OPTIONS http://10.1.1.12:20333
HTTP/1.1 302 Found
Date: Tue, 14 Apr 2015 23:39:40 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.3
Set-Cookie: PHPSESSID=3ph0dvg4quebl1b2e711d8i5p1; path=/; secure
Cache-Control: public, must-revalidate, max-age=0
X-Served-By: curie.-SNIP-
Location: /mod.php/index.php
Vary: Accept-Encoding
Content-Length: 0
Connection: close
Content-Type: text/html; charset=UTF-8


 and some variations of the above that I often use to check the headers only:
 
 $ curl -s -S -I --http1.0 -X OPTIONS http://10.1.1.12:20333/
 $ curl -s -S -D - --http1.0 -X OPTIONS http://10.1.1.12:20333/
 
 You can also try the health check with 

Long ACLs

2015-04-14 Thread CJ Ess
What is the best way to deal with long ACLs in HAProxy? For instance,
Amazon EC2 has around 225 address blocks, so if I wanted to direct requests
originating from EC2 to a particular backend, that's a lot of CIDRs to
manage and compare against. Any suggestions on how best to approach a
situation like this?
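One approach worth noting (a sketch with placeholder file and backend names): haproxy can load ACL patterns from a file with -f, one CIDR per line, which keeps the 225 blocks out of the main configuration and matches them efficiently:

```
frontend fe_main
    bind *:80
    mode http
    # /etc/haproxy/ec2-cidrs.lst: one CIDR per line, e.g. 54.144.0.0/14
    acl from_ec2 src -f /etc/haproxy/ec2-cidrs.lst
    use_backend be_ec2 if from_ec2
    default_backend be_default
```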


Re: Achieving Zero Downtime Restarts at Yelp

2015-04-14 Thread CJ Ess
I think the gold standard for graceful restarts is nginx - it will start a
new instance (could be a new binary), send the listening fds to the new
instance, then the original instance will stop accepting new requests and
allow the existing connections to drain off. The whole process is
controlled by signals, and you can even decide there is a problem with the
new instance and have the old one resume taking traffic. I love it because
I can bounce nginx all day long and no one notices. I could see haproxy
having the same ability when nbproc = 1, though it's not exactly a
two-weekend project.
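The sequence described above corresponds to nginx's documented master-process signals; roughly (the pid-file paths are placeholders):

```shell
# On-the-fly binary upgrade in nginx (documented master-process signals).
# USR2 forks a new master running the new binary; the old pid file is
# renamed to nginx.pid.oldbin.
kill -USR2 "$(cat /var/run/nginx.pid)"

# Ask the old master's workers to finish their current connections.
kill -WINCH "$(cat /var/run/nginx.pid.oldbin)"

# All good? Shut the old master down gracefully...
kill -QUIT "$(cat /var/run/nginx.pid.oldbin)"

# ...or roll back instead: revive the old workers and quit the new master.
# kill -HUP  "$(cat /var/run/nginx.pid.oldbin)"
# kill -QUIT "$(cat /var/run/nginx.pid)"
```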


On Mon, Apr 13, 2015 at 1:24 PM, Joseph Lynch joe.e.ly...@gmail.com wrote:

 Hello,

 I published an article today on Yelp's engineering blog (
 http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html)
 that shows a technique we use for low latency, zero downtime restarts of
 HAProxy. This solves the when I restart HAProxy some of my clients get
 RSTs problems that can occur. We built it to solve the RSTs in our
 internal load balancing, so there is a little more work to be done to
 modify the method to work with external traffic, which I talk about in the
 post.

 The solution basically consists of using Linux queuing disciplines to
 delay SYN packets for the duration of the restart. It can definitely be
 improved by further tuning the qdiscs or replacing the iptables mangle with
 a u8/u32 tc filter, but I decided it was better to talk about the idea and
 if the community likes it, then we can optimize it further.
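
For readers who want to experiment, a rough sketch of the idea with stock tools (this uses a netem delay as a stand-in for the article's plug qdisc; the device, fw mark and delay are placeholders):

```shell
# Rough sketch: delay outgoing SYNs on loopback during a reload so
# clients retransmit instead of receiving RSTs.

# 1. Mark SYN packets leaving via loopback.
iptables -t mangle -A OUTPUT -o lo -p tcp --syn -j MARK --set-mark 1

# 2. Classful root qdisc with a 4th band the default priomap never uses,
#    and a fw filter steering marked packets into it.
tc qdisc add dev lo root handle 1: prio bands 4 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
tc filter add dev lo parent 1:0 protocol ip prio 1 handle 1 fw flowid 1:4

# 3. Hold SYNs for longer than the reload takes (placeholder: 500ms).
tc qdisc add dev lo parent 1:4 handle 40: netem delay 500ms

# ... reload haproxy here ...

# 4. Remove the delay once the new process is accepting.
tc qdisc del dev lo parent 1:4 handle 40:
```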

 -Joey



Re: redis redispatch question

2015-04-14 Thread Baptiste
On Tue, Apr 14, 2015 at 5:12 PM, Jim Gronowski jgronow...@ditronics.com wrote:
 Good day, everyone.



 I'm using HAproxy in front of a redis sentinel cluster.  If has worked very
 well, but this morning I ran into a small problem.  The sentinel cluster
 elected a new master, and HAproxy correctly detected the change and updated
 accordingly  (new connections went to the correct server).  However, one of
 our client web applications kept the connection open to the old server (now
 a slave), generating errors.  I'm guessing it's due to keepalives.



 Will 'option redispatch' correct this?  If not, is there a preferred way to
 close the connection and force the client to reconnect?  It doesn't
 necessarily have to be graceful, although that would be nice.



 Pertinent config below.



 -Jim





 defaults
     log     global
     mode    tcp
     option  tcplog
     option  dontlognull
     option  clitcpka
     option  srvtcpka
     timeout connect 5000
     timeout client  3m
     timeout server  12
     errorfile 400 /etc/haproxy/errors/400.http
     errorfile 403 /etc/haproxy/errors/403.http
     errorfile 408 /etc/haproxy/errors/408.http
     errorfile 500 /etc/haproxy/errors/500.http
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http

 frontend redisFE
     bind *:6379
     mode tcp
     maxconn 10240
     default_backend redisBE

 backend redisBE
     mode tcp
     option tcplog
     balance source
     option tcp-check
     #tcp-check send AUTH\ foobar\r\n
     #tcp-check expect +OK
     tcp-check send PING\r\n
     tcp-check expect string +PONG
     tcp-check send info\ replication\r\n
     tcp-check expect string role:master
     tcp-check send QUIT\r\n
     tcp-check expect string +OK
     server redis-01 127.0.0.1:6379 maxconn 1024 check inter 1s
     server redis-02 127.0.0.2:6379 maxconn 1024 check inter 1s





Hi Jim,

You're missing the 'on-marked-down shutdown-sessions' parameter on your
server lines.
It will kill the sessions established on a server when it is marked
DOWN by the health checks.
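
On the server lines from the config above, that would look like (a sketch):

```
server redis-01 127.0.0.1:6379 maxconn 1024 check inter 1s on-marked-down shutdown-sessions
server redis-02 127.0.0.2:6379 maxconn 1024 check inter 1s on-marked-down shutdown-sessions
```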

Baptiste



Re: How to revoke an entry from stick table base on url pattern

2015-04-14 Thread Thierry FOURNIER
On Tue, 14 Apr 2015 14:36:12 +0200
Javathoughts ba...@free.fr wrote:

 Hello,
 
 We have to implement a reverse proxy and are in the POC phase with HAproxy.
 
 A requirement is to revoke a session on a specific URL match. This is due
 to the client and server protocol, and we have no control over them to
 change this behaviour.
 
 The approach we are testing uses stick tables, but we haven't found a
 method to revoke an entry based on the URL received.


Hi, you can try with maps. The commands are:

   http-request set-map(<file name>) <key fmt> <value fmt>
   http-request del-map(<file name>) <key fmt>

   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-http-request

   acl ... ,map(<file name>)

   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.1-map

<file name> can be an empty file.

You can check the content of your map with the stats socket. You can
also try lookups in the map (mainly for debugging):

   socat - unix-connect:<haproxy socket path> <<< "show map <file name>"
   socat - unix-connect:<haproxy socket path> <<< "get map <file name> <key>"

   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-show%20map
   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-get%20map

and also add or remove entries:

   socat - unix-connect:<haproxy socket path> <<< "del map <file name> <key>"
   socat - unix-connect:<haproxy socket path> <<< "set map <file name> <key> <value>"

   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-del%20map
   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-set%20map
Thierry


 Kind Regards