To validate the client certificate via OCSP

2014-06-23 Thread Stephen Wang
Hi HAProxy,

In my setup there is an OCSP responder that stores the revocation status of 
all the client certificates. Is there any way to configure HAProxy so that it 
talks to the OCSP responder via OCSP to check the client's certificate before 
accepting it?

Thanks a lot.
P.S. I saw OCSP stapling is mentioned in the 1.5 release, but I guess it's 
something different.
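
For reference, HAProxy 1.5 does not query an OCSP responder for client
certificates; the closest built-in facility is checking client certificates
against a CA and a locally maintained CRL on the bind line. A minimal sketch
of that alternative, assuming local file paths:

--
frontend fe_https
    mode http
    bind :443 ssl crt /etc/haproxy/certs/server.pem ca-file /etc/haproxy/certs/clients-ca.pem verify required crl-file /etc/haproxy/certs/clients.crl
--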


STEPHEN WANG
Technical Product Manager

CGC/SD3
No 42 Jianzhong Rd Tianhe High-tech Ind.
510630, GuangZhou, China
SMS/MMS +8618620267399



Re: keep-alive on server side

2014-06-23 Thread Jie Jin
Nginx has this feature: connection pool
http://nginx.com/blog/load-balancing-with-nginx-plus-part2/


Thanks,
金杰 (Jie Jin)


On Fri, Jun 20, 2014 at 6:38 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi,


  Is it possible to use HTTP keep-alive between haproxy and
  backend even if client does not use it?
  Client closes connection, but haproxy still maintains open
  connection to backend (based on some timeout) and re-use it
  when new request arrives.
 
  It will save some time for new connection setup between haproxy
   and backend and can be useful in case when server responds
  very fast (and connection rate is high).

 That would be connection pooling/multiplexing and is not currently
 supported. It is also pretty complex, not sure whether this is
 something that will be done in 1.6.


 Regards,

 Lukas





problem w/ host header on haproxy.org download servers

2014-06-23 Thread Bernhard Weißhuhn
Hi,

I noticed a strange behavior on the haproxy.org servers, which unfortunately is 
being triggered trying to download the source from a chef-client.

When downloading the tar.gz, the chef client sends :80 as part of the host 
header (which is legal from my understanding of the rfc).
This header reliably results in a 404, whereas leaving out the port number 
results in a successful download:

This does not work:

root@frontend2:~# curl -I -H "Host: haproxy.org:80" http://haproxy.org/download/1.5/src/haproxy-1.5.0.tar.gz
HTTP/1.1 404 Not Found
Date: Mon, 23 Jun 2014 12:03:01 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

This works:

root@frontend2:~# curl -I -H "Host: haproxy.org" http://haproxy.org/download/1.5/src/haproxy-1.5.0.tar.gz
HTTP/1.1 200 OK
Date: Mon, 23 Jun 2014 12:03:05 GMT
Server: Apache (Unix; Formilux/0.1.8)
Last-Modified: Thu, 19 Jun 2014 19:06:22 GMT
Accept-Ranges: bytes
Content-Type: application/x-gzip
Expires: Mon, 23 Jun 2014 13:58:48 GMT
Cache-Control: max-age=28800
Content-Length: 1329040
Age: 21857
X-Cache: HIT from haproxy.org
Set-Cookie: sid=c; path=/
Cache-control: private

Do you consider this a misconfiguration as well and could this possibly be 
corrected? Altering the behavior of the chef client to leave out that 
superfluous port number is very cumbersome.

cheers,
  bkw


Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Bernhard Weißhuhn
Addendum:

This only happens on ipv4, ipv6 on 2001:7a8:363c:2::2 is fine:

bkw@Aeronaut:~$ curl -6 -I -H "Host: haproxy.org:80" http://haproxy.org/download/1.5/src/haproxy-1.5.0.tar.gz
HTTP/1.1 200 OK
Date: Mon, 23 Jun 2014 12:38:00 GMT
Last-Modified: Thu, 19 Jun 2014 19:06:22 GMT
Accept-Ranges: bytes
Content-Length: 1329040
Cache-Control: max-age=28800
Expires: Mon, 23 Jun 2014 20:38:00 GMT
Content-Type: application/x-gzip
Server: Apache (Unix; Formilux/0.1.8)


On 23.06.2014, at 14:08, Bernhard Weißhuhn b...@codingforce.com wrote:

 Hi,
 
 I noticed a strange behavior on the haproxy.org servers, which unfortunately 
 is being triggered trying to download the source from a chef-client.
 
 When downloading the tar.gz, the chef client sends :80 as part of the host 
 header (which is legal from my understanding of the rfc).
 This header reliably results in a 404, whereas leaving out the port number 
 results in a successful download:
 
 [...]



Re: keep-alive on server side

2014-06-23 Thread Willy Tarreau
Hi,

On Fri, Jun 20, 2014 at 12:38:48PM +0200, Lukas Tribus wrote:
 Hi,
 
 
  Is it possible to use HTTP keep-alive between haproxy and
  backend even if client does not use it?
  Client closes connection, but haproxy still maintains open
  connection to backend (based on some timeout) and re-use it
  when new request arrives.
 
  It will save some time for new connection setup between haproxy
   and backend and can be useful in case when server responds
  very fast (and connection rate is high).
 
 That would be connection pooling/multiplexing and is not currently
 supported. It is also pretty complex, not sure whether this is
 something that will be done in 1.6.

I would like to be able to do this in 1.6, but it's not high on the
priority list. The way the code is now architected makes it less
complex than it used to be; the real issue is to do it right. I
even hesitated to try to work on it after the server-side KA started
to work.

In practice, you must not send a request over an already established
connection if it's a non-idempotent request (eg: a POST) or if you're
not able to safely replay it yourself or are not certain the client
can replay it (eg: not the first request of a connection).

Replaying an already sent idempotent request could more or less be
achieved under some conditions. The worst problem we have to face
now is that once a buffer is emptied, it's realigned so we lose its
origin. But with an extra do-not-realign flag, we could possibly
keep the request in the buffer.

There are other issues with connection pools. Connections can die the
dirty way (eg: not used over a period and expire through a firewall).
So you have to test them periodically. Health checks are not necessarily
sufficient for this since they're too slow or will hammer the server.
One solution can be to limit a connection's max idle time to a few
seconds (it's a waste of resources otherwise anyway).

I think we'll get it in 1.6 but we need to be careful about what we
do.

Regards,
Willy
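
For reference, a minimal sketch of what 1.5 already provides: the server-side
connection is kept alive for the lifetime of a given client connection (no
cross-client pooling), with illustrative timeout values:

--
defaults
    mode http
    option http-keep-alive
    option prefer-last-server
    timeout http-keep-alive 4s
--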




Feature request: redispatch-on-5xx

2014-06-23 Thread Dmitry Sivachenko
Hello!

One more thing which can be very useful in some setups: if backend server 
returns HTTP 5xx status code, it would be nice to have an ability to retry the 
same request on another server before reporting error to client (when you know 
for sure the same request can be sent multiple times without side effects).

Is it possible to make some configuration switch to allow such retries?

Thanks.
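
For context, the closest existing facility in 1.5 redispatches only on
connection failures, not on 5xx responses (see Willy's reply further down);
a minimal sketch with illustrative server addresses:

--
backend bk_app
    option redispatch
    retries 3
    server s1 10.0.0.1:80 check
    server s2 10.0.0.2:80 check
--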


3rd regression : enough is enough!

2014-06-23 Thread Willy Tarreau
Hi guys,

today we got our 3rd regression caused by the client-side timeout changes
introduced in 1.5-dev25. And this one is a major one, causing FD leaks
and CPU spins when servers do not advertise a content-length and the
client does not respond to the FIN.  And the worst of it, is I have no
idea how to fix this at all.

I had that bitter feeling when doing these changes a month ago that
they were so tricky that something was obviously going to break.
It has broken twice already and we could fix the issues. The second
time was quite harder, and we now see the effect of the regressions
and their workarounds spreading like an oil stain on paper, with
workarounds becoming more and more complex and less under control.

So in the end I have reverted all the patches responsible for these
regressions. The purpose of these patches was to report cD instead
of sD in the logs in the case where a client disappears during a
POST and haproxy has a shorter timeout than the server's.

I'll issue 1.5.1 shortly with the fix before everyone gets hit by busy
loops and lack of file descriptors. If we find another way to do it
later, we'll try it in 1.6 and may consider backporting to 1.5 if the
new solution is absolutely safe. But we're very far away from that
situation now.

I'm sorry for this mess just before the release, next time I'll be
stricter about such dangerous changes that I don't feel at ease with.

Willy




Re: Feature request: redispatch-on-5xx

2014-06-23 Thread Willy Tarreau
Hi Dmitry,

On Mon, Jun 23, 2014 at 06:16:28PM +0400, Dmitry Sivachenko wrote:
 Hello!
 
 One more thing which can be very useful in some setups: if backend server
 returns HTTP 5xx status code, it would be nice to have an ability to retry
 the same request on another server before reporting error to client (when you
 know for sure the same request can be sent multiple times without side
 effects).
 
 Is it possible to make some configuration switch to allow such retries?

No, it is not, because if the server has responded, it means that haproxy does
not have the request anymore. That's precisely one of the difficulties of
implementing server-side multiplexing.

Willy




Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Holger Just
Hi Bernhard,

Bernhard Weißhuhn wrote:
 When downloading the tar.gz, the chef client sends :80 as part of the host 
 header (which is legal from my understanding of the rfc).
 This header reliably results in a 404, whereas leaving out the port number 
 results in a successful download:

This happens because chef currently creates an unusual Host header for its
remote_file resources. This doesn't only break haproxy.org but many other
services as well.

The issue is fixed in the chef master branch already [1]. To use the fix
right now, you can add a monkey-patch into one of your cookbooks which
patches chef's core with the fix [2]. The linked gist is a direct
translation of the patch. I use this currently in production if that
matters to you.

Regards,
Holger

[1] https://github.com/opscode/chef/pull/1471
[2] https://gist.github.com/meineerde/83e044c709b94358a616



ssl compression

2014-06-23 Thread Markus Rietzler

hi,
i am just in the process of reviewing/correcting/hardening my ssl setup.

haproxy uses ssl-termination on the frontend. this works very well.
i also use ssl on the backend - due to the setup of our application and apache 
config - this also works very well.

when i run an ssl check with globalsign or ssllabs i get a warning about 
CRIME/BEAST (in TLS v1.0)

in apache i can use

#don't use sslcompression, it's insecure
SSLCompression off

to switch off tls compression (because of beast/crime attack) with tls v1.0 and 
compression.
can i deactivate it in haproxy too?

thanxs

markus





Re: ssl compression

2014-06-23 Thread Vincent Bernat
 ❦ 23 June 2014 18:14 +0200, Markus Rietzler w...@mrietzler.de :

 to switch off tls compression (because of beast/crime attack) with tls
 v1.0 and compression.  can i deactivate it in haproxy too?

haproxy disables SSL compression and there is no flag to enable
it. However, disabling SSL compression is not available in OpenSSL
0.9.8. Which version of OpenSSL are you using?
-- 
 /*
  * We used to try various strange things. Let's not.
  */
2.2.16 /usr/src/linux/fs/buffer.c



Re: ssl compression

2014-06-23 Thread Thomas Heil
Hi,

On 23.06.2014 18:32, Vincent Bernat wrote:
  ❦ 23 June 2014 18:14 +0200, Markus Rietzler w...@mrietzler.de :

 to switch off tls compression (because of beast/crime attack) with tls
 v1.0 and compression.  can i deactivate it in haproxy too?
You should not add a new thread to an existing one.
 haproxy disables SSL compression and there is no flag to enable
 it. However, disabling SSL compression is not available in OpenSSL
 0.9.8. Which version of OpenSSL are you using?

Please have a look at
http://blog.haproxy.com/2013/01/21/mitigating-the-ssl-beast-attack-using-the-aloha-load-balancer-haproxy/
If you need support for PFS too, then try lines like this:
--
frontend fe_443
    bind :443 name https ssl crt /etc/haproxy/certs/mycert.pem ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK
    mode http
--
and in the global section
--
# use at least your private key length, minimum 1024
tune.ssl.default-dh-param 2048
--
After that check your site again.


cheers,
thomas



Re: Proxy Protocol v2 Implementations?

2014-06-23 Thread tyju tiui
Just FYI -- proxy protocol v1 and v2 decoding has recently landed in netty 
(https://github.com/netty/netty/commit/d7b2affe321edeaa51c1fa7bb3df9a5badb4728a)

Despite the original commit message, v2 is actually supported (it was finished / 
tested after the haproxy-1.5-dev25 release). TLVs are currently ignored but 
otherwise it is a full client implementation.


On , Willy Tarreau w...@1wt.eu wrote:
Hi,




On Fri, Apr 18, 2014 at 07:22:17PM -0700, tyju tiui wrote:
 Hi,
 
 I'm curious if anyone knows of any proxy protocol v2 implementations (client
 or server)?
 I've written my implementation against the spec
 (http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt) but I realize now
 I have no way to really validate my code.

It's currently being discussed and the protocol is in the process of being
slightly extended. Todd Lyons has implemented it in Exim, and David S has
just posted a patch for haproxy to make forward progress. Feel free to
participate!

Regards,
Willy



Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Bernhard Weißhuhn
On 23.06.2014, at 16:50, Holger Just w...@meine-er.de wrote:

 [2] https://gist.github.com/meineerde/83e044c709b94358a616

Perfect, that worked like a charm. Thank you!

Still, I think it's really the servers who are to blame for misbehaving. I just 
rechecked the following RFCs:

- http://tools.ietf.org/html/rfc7230#section-5.4
- http://tools.ietf.org/html/rfc7230#section-2.7.1
- http://tools.ietf.org/html/rfc7230#section-2.7.3
- http://tools.ietf.org/html/rfc3986#section-3.2.3
- http://tools.ietf.org/html/rfc3986#section-6.2.3

RFC 7231 even has an example with Host: server.example.com:80, although that 
is in the context of a CONNECT request, admittedly.

Nowhere did I find any indication that a host-header with a default port should 
be illegal or treated in any way differently from one without it.
Imho to support Postel's Law, both sides should be changed, client (sender in 
this case) conservative, server more liberal.

Anyways, I rest my case and thank you again for the quick workaround.

cheers,
  bkw




Re: Proxy Protocol v2 Implementations?

2014-06-23 Thread Willy Tarreau
Hi,

On Mon, Jun 23, 2014 at 10:32:53AM -0700, tyju tiui wrote:
 Just FYI -- proxy protocol v1 and v2 decoding has recently landed in netty
 (https://github.com/netty/netty/commit/d7b2affe321edeaa51c1fa7bb3df9a5badb4728a)

Great!

 Despite the original commit message, v2 is actually supported (it was finished
 / tested after the haproxy-1.5-dev25 release). TLVs are currently ignored
 but otherwise it is a full client implementation.

That's exactly what haproxy does, and it was designed this way so that TLVs
are not mandatory. I think that most implementations will ignore them :-)

Cheers,
Willy




Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Willy Tarreau
Hi,

On Mon, Jun 23, 2014 at 02:08:57PM +0200, Bernhard Weißhuhn wrote:
 Hi,
 
 I noticed a strange behavior on the haproxy.org servers, which unfortunately 
 is being triggered trying to download the source from a chef-client.
 
 When downloading the tar.gz, the chef client sends :80 as part of the host
 header (which is legal from my understanding of the rfc).
 This header reliably results in a 404, whereas leaving out the port number
 results in a successful download:

OK I found why. The front server's haproxy used to send those :80 to the
local cached copy of the site because they didn't match a configured pattern.
I've fixed that now. I'll now have to check why the local cache returns a
404 :-)

Thanks for reporting this!
Willy




Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Willy Tarreau
On Mon, Jun 23, 2014 at 07:32:53PM +0200, Bernhard Weißhuhn wrote:
 On 23.06.2014, at 16:50, Holger Just w...@meine-er.de wrote:
 
  [2] https://gist.github.com/meineerde/83e044c709b94358a616
 
 Perfect, that worked like a charm. Thank you!
 
 Still, I think it's really the servers who are to blame for misbehaving. I 
 just rechecked the following RFCs:
 
 - http://tools.ietf.org/html/rfc7230#section-5.4
 - http://tools.ietf.org/html/rfc7230#section-2.7.1
 - http://tools.ietf.org/html/rfc7230#section-2.7.3
 - http://tools.ietf.org/html/rfc3986#section-3.2.3
 - http://tools.ietf.org/html/rfc3986#section-6.2.3
 
 RFC 7231 even has an example with Host: server.example.com:80, although that
 is in the context of a CONNECT request, admittedly.

I agree with you.

 Nowhere did I find any indication that a host-header with a default port
 should be illegal or treated in any way differently from one without it.

It's just a matter of how the rules are written. On the front server, we
have an haproxy instance matching domain names using hdr_end(host), so it
used to only check for haproxy.org and so on, and would not match the
trailing :80.

 Imho to support Postel's Law, both sides should be changed, client (sender in
 this case) conservative, server more liberal.

Already done :-)

Willy
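
As an illustration of that kind of rule (not the actual haproxy.org
configuration), an hdr_end(host) ACL can simply list both forms of the
header; the backend name here is an assumption:

--
acl host_haproxy hdr_end(host) -i haproxy.org haproxy.org:80
use_backend bk_haproxy_org if host_haproxy
--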




Re: problem w/ host header on haproxy.org download servers

2014-06-23 Thread Bernhard Weißhuhn
Confirmed, it works now.

Two fixes for one problem within hours - you guys are amazing!

cheers,
  bkw


On 23.06.2014, at 19:56, Willy Tarreau w...@1wt.eu wrote:

 Hi,
 
 On Mon, Jun 23, 2014 at 02:08:57PM +0200, Bernhard Weißhuhn wrote:
 Hi,
 
 I noticed a strange behavior on the haproxy.org servers, which unfortunately 
 is being triggered trying to download the source from a chef-client.
 
 When downloading the tar.gz, the chef client sends :80 as part of the host
 header (which is legal from my understanding of the rfc).
 This header reliably results in a 404, whereas leaving out the port number
 results in a successful download:
 
 OK I found why. The front server's haproxy used to send those :80 to the
 local cached copy of the site because they didn't match a configured pattern.
 I've fixed that now. I'll now have to check why the local cache returns a
 404 :-)
 
 Thanks for reporting this!
 Willy
 





Re: 3rd regression : enough is enough!

2014-06-23 Thread Patrick Hemmer

From: Willy Tarreau w...@1wt.eu
Sent: 2014-06-23 10:23:44 EDT
To: haproxy@formilux.org
CC: Patrick Hemmer hapr...@stormcloud9.net, Rachel Chavez
rachel.chave...@gmail.com
Subject: 3rd regression : enough is enough!

 Hi guys,

 today we got our 3rd regression caused by the client-side timeout changes
 introduced in 1.5-dev25. And this one is a major one, causing FD leaks
 and CPU spins when servers do not advertise a content-length and the
 client does not respond to the FIN.  And the worst of it, is I have no
 idea how to fix this at all.

 I had that bitter feeling when doing these changes a month ago that
 they were so tricky that something was obviously going to break.
 It has broken twice already and we could fix the issues. The second
 time was quite harder, and we now see the effect of the regressions
 and their workarounds spreading like an oil stain on paper, with
 workarounds becoming more and more complex and less under control.

 So in the end I have reverted all the patches responsible for these
 regressions. The purpose of these patches was to report cD instead
 of sD in the logs in the case where a client disappears during a
 POST and haproxy has a shorter timeout than the server's.

 I'll issue 1.5.1 shortly with the fix before everyone gets hit by busy
 loops and lack of file descriptors. If we find another way to do it
 later, we'll try it in 1.6 and may consider backporting to 1.5 if the
 new solution is absolutely safe. But we're very far away from that
 situation now.

 I'm sorry for this mess just before the release, next time I'll be
 stricter about such dangerous changes that I don't feel at ease with.

 Willy



This is unfortunate. I'm guessing a lot of the issue was in ensuring the
client timeout was observed. Would it at least be possible to change the
response, so that even if the server timeout is what kills the request,
the client gets sent back a 408 instead of a 503?

-Patrick


Re: Email Alert Proposal

2014-06-23 Thread Delta Yeh
I think invoking an external command on alert would be better, just like
what external-check does.


2014-06-24 8:15 GMT+08:00 Simon Horman ho...@verge.net.au:

 Hi Willy,

 Malcolm has asked me to open a discussion with you regarding adding
 email alerts to haproxy and that is the purpose of this email.

 In essence the motivation is to provide a lightweight email alert
 feature that may be used in situations where a full-blown monitoring
 system is not in use.

 There is some discussion of this topic and several solutions,
  including patches to haproxy, on the loadbalancer.org blog.


 http://blog.loadbalancer.org/3-ways-to-send-haproxy-health-check-email-alerts/

 Would you be open to including such a feature in haproxy?

 If so I had it in mind to have haproxy send emails using the sendmail
 command,
 a variation of the mailx implementation at the link above, avoiding the
 need to implement an SMTP client.

 I was thinking it could be configured using directives like the following,
 borrowing ideas from my recent external-agent patch.

 global
 email-alert

 listen ...

 option email-alert
 email-alert command sendmail
  email-alert path /usr/sbin:/usr/lib
  email-alert from from@x.y.z
 email-alert to  to@x.y.z
 email-alert cc  cc1@x.y.z, cc2@x.y.z
 email-alert bcc bcc@x.y.z
 email-alert subject Loadbalancer alert
 email-alert custom-header X-Custom: foo

 It might be nice to allow the use of printf style directives in
 the subject to allow it to include the name of the proxy and other
 useful information. I expect that different users have different needs
 there.




Re: Working example of url32+src

2014-06-23 Thread Andrew Kroenert
On Wed, Jun 18, 2014 at 5:51 PM, Baptiste bed...@gmail.com wrote:

 On Wed, Jun 18, 2014 at 8:09 AM, Andrew Kroenert and...@thek.ro wrote:
  Hey Guys,
 
   Im trying to tarpit based on Unique IP and specific URL. I started with
   the following:
 
  listen  web
  ...
   # Track IP over 60sec, if http_req rate greater than 20 AND page.html,
   # send to new backend with tarpit only.
   stick-table type ip size 1m expire 60s store gpc0,http_req_rate(60s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { src_get_gpc0 gt 0 }
 
  acl ratelimiteIP src_http_req_rate ge 20
  acl showPage path_end page.html
  use_backend web-ratelimit if ratelimiteIP showPage
 
  backend web-ratelimit
  mode http
  fullconn 500
 
  timeout tarpit 5s
  reqitarpit .
 
 
   The above example works to a degree, but not what I was hoping for. I am
   only sending to a new backend to easily see results in the stats web page.
  
   The above example tracks all IP requests, and if the url matches page.html
   it blocks it (Example: 100x req to index.html and 1 req to page.html would
   trigger). I am hoping to track ONLY ip addresses going to a specific URL,
   not all in general.
 
  I then moved onto the following example:
 
  listen  web
  ...
  acl showPage path_end page.html
  acl ratelimitIP sc1_get_gpc0 ge 0
  stick-table type binary len 20 size 500 store gpc0
 
  tcp-request content track-sc1  url32+src if showPage
  use_backend web-ratelimit if ratelimitIP
 
  backend web-ratelimit
  mode http
  fullconn 500
 
  timeout tarpit 5s
  reqitarpit .
 
  But this doesnt seem to track them correctly.
 
   Anyone have any pointers or a working config on url32+src? Would be
   greatly appreciated.
 
  Thanks
 
  Andrew
 
 

 Hi Andrew,

  You picked up your example from the blog post related to brute force
  protection.
  In such a case you just want to protect a particular URL from being
  hit too much.
 URL:
 http://blog.haproxy.com/2013/04/26/wordpress-cms-brute-force-protection-with-haproxy/

  Now, if you explain your needs to us, we may be able to help you.

 Baptiste


Thanks Baptiste,

I had followed the article but I thought it was either backend OR frontend,
not both.

I have configured both and it is working as expected, once I configured the
peers section.

Thanks again.
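
For readers hitting the same issue, the peers section Andrew mentions is what
lets a stick-table be shared between haproxy instances (and preserved across
reloads); a minimal sketch with assumed peer names and addresses:

--
peers lb_peers
    # the peer name must match this host's name (or haproxy's -L argument)
    peer lb1 10.0.0.1:1024

listen web
    bind :80
    stick-table type binary len 20 size 500 peers lb_peers store gpc0
--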


How can I rewrite based on path?

2014-06-23 Thread Jeffrey Scott Flesher Gmail
I have an acl rule to see if path begins with /ww as in
domain.tld/ww/en... 
acl has_ww_uri path_beg -i /ww 
If it is just the domain.tld, I want to rewrite it to /ww
I also have static content I do not want to rename, so I added this rule
acl url_static path_end .gif .png .jpg .css .js .pdf .m4v

I want to do something like:
!has_ww_uri !url_static reqirep ^([^\ :]*)\ /(.*) \1\ /ww\2

But this does not work, does anyone have any idea how I can do this?

Do I have to create a backend to do the rewrite?

use_backend needsrewrite if !has_ww_uri !url_static
backend needsrewrite
reqirep ^([^\ :]*)\ /(.*) \1\ /ww\2
or this
server Backend1 10.0.0.1:80 redir http:// www.example.com/backend1
...
Because I have more logic that this would bypass, like all my checks to
see what servers are up, so I would have to have more backends defined
for this to work, so I thought I would ask first for an easier way.

Is there a way to modify this to work:
redirect location http://domain.tld/ww code 301 if !has_ww_uri
so I do not have to use a full url, since I might have many on this
account, so its not hard coded:
redirect location /ww code 301 if !has_ww_uri

I do not have Apache loaded, so I cannot use mod_rewrite; this is a Wt
Application running httpd.

Thanks
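
For reference, reqirep accepts an optional if/unless condition in 1.5, so the
rewrite can stay in the same frontend/listen section without a dedicated
backend; a sketch reusing the ACLs above (note the extra slash so /en/...
becomes /ww/en/...):

--
acl has_ww_uri path_beg -i /ww
acl url_static path_end .gif .png .jpg .css .js .pdf .m4v
reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri !url_static
--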



Re: Email Alert Proposal

2014-06-23 Thread Willy Tarreau
Hi Simon,

On Tue, Jun 24, 2014 at 09:15:13AM +0900, Simon Horman wrote:
 Hi Willy,
 
 Malcolm has asked me to open a discussion with you regarding adding
 email alerts to haproxy and that is the purpose of this email.
 
 In essence the motivation is to provide a lightweight email alert
 feature that may be used in situations where a full-blown monitoring
 system is not in use.
 
 There is some discussion of this topic and several solutions,
  including patches to haproxy, on the loadbalancer.org blog.
 
 http://blog.loadbalancer.org/3-ways-to-send-haproxy-health-check-email-alerts/
 
 Would you be open to including such a feature in haproxy?
 
 If so I had it in mind to have haproxy send emails using the sendmail command,
 a variation of the mailx implementation at the link above, avoiding the
 need to implement an SMTP client.
 
 I was thinking it could be configured using directives like the following,
 borrowing ideas from my recent external-agent patch.
 
 global
   email-alert
 
 listen ...
 
   option email-alert
   email-alert command sendmail
   email-alert path /usr/sbin:/usr/lib
   email-alert from from@x.y.z
   email-alert to  to@x.y.z
   email-alert cc  cc1@x.y.z, cc2@x.y.z
   email-alert bcc bcc@x.y.z
   email-alert subject Loadbalancer alert
   email-alert custom-header X-Custom: foo
 
 It might be nice to allow the use of printf style directives in
 the subject to allow it to include the name of the proxy and other
 useful information. I expect that different users have different needs
 there.

We had such an idea in the past, however the principle was to use the
address of a smart relay host. We cannot use a command because the process
is supposed to be chrooted. Also, in my opinion the SMTP relay should be
per section (ie: supported in the defaults section) because in shared
environments, customers want to use a different gateway and e-mail
settings. In fact in the ALOHA we have implemented a daemon which watches
the unix socket to send e-mails because by then it was too much work to
implement it natively. Now it should be much simpler.

I was just wondering whether we should not go slightly further and support
mailer sections just like we have peers. I'd like to do the same for DNS
resolving later and I think it makes the configs more understandable, and
more flexible (eg: later we can think about having multiple mail relays).
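
As a purely illustrative sketch of what such a mailers section could look
like (none of these directives exist in 1.5; the names and addresses are
assumptions for discussion):

--
mailers alert-mailers
    mailer smtp1 192.168.0.10:25
    mailer smtp2 192.168.0.11:25

backend bk_app
    email-alert mailers alert-mailers
    email-alert from haproxy@example.com
    email-alert to   admin@example.com
    email-alert level alert
--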

We also want to have the ability to define what events can result in an
e-mail alert. For example, I know some hosting providers who absolutely
want to know when a service is operational again (so that they can drive
back home), while others absolutely refuse to be spammed with such info.

Sections also make it possible to add extra settings for the mail relays,
for example SSL + a certificate. Or maybe some default from/to/...

Maybe we should bring this discussion to the mailing list so that other
users can suggest some features we don't have in mind?

Thanks,
Willy




Re: 3rd regression : enough is enough!

2014-06-23 Thread Willy Tarreau
Hi Patrick,

On Mon, Jun 23, 2014 at 09:30:11PM -0400, Patrick Hemmer wrote:
 This is unfortunate. I'm guessing a lot of the issue was in ensuring the
 client timeout was observed. Would it at least be possible to change the
 response, so that even if the server timeout is what kills the request,
 the client gets sent back a 408 instead of a 503?

For now I have no idea. All the mess came from the awful changes that
were needed to ignore the server-side timeout and pretend it came from
the client despite the server triggering first. This required messing
with these events in a very dangerous way :-(

So right now I'd suggest to try with a shorter client timeout than the
server timeout. I can try to see how to better *report* this specific
event if needed, but I don't want to put the brown paper bag on
timeouts anymore.

Regards,
Willy
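
A minimal sketch of that workaround, with illustrative values (the client
timeout strictly shorter than the server timeout, so the client-side expiry
fires first):

--
defaults
    timeout client  20s
    timeout server  60s
    timeout connect 5s
--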