Re: ssl performance regression in version 1.6

2016-01-22 Thread Willy Tarreau
Hi Gary,

On Fri, Jan 22, 2016 at 06:04:07PM -0800, Gary Barrueto wrote:
> > Do you have a way to ensure the same algorithms are
> > negotiated on both versions ? I've run a diff between 1.5.14 and 1.6.3
> > regarding SSL, and it's very limited. Most of the changes affect OpenSSL
> > 1.0.2 (you're on 1.0.1), or automatic DH params and in your case they're
> > already forced.
> >
> 
> That's exactly what I'm doing now: forcing the client to only negotiate the
> specific protocol/cipher. The largest difference we see is
> with ECDHE-RSA-AES256-SHA384/TLS1.2+keepalive, which is 16% slower than on
> 1.5.14.

OK thank you.

> > There's something though, I'm seeing SSL_MODE_SMALL_BUFFERS being added
> > in 1.6. It only comes with a patch and is not standard, it allows openssl
> > to use less memory for small messages. Could you please run the following
> > command to see what SSL_MODE_* options are defined on your system :
> >
> >$ grep -rF SSL_MODE_ /usr/include/openssl/
> >
> Here is the output from the command.
> 
> gary:~$ grep -rF SSL_MODE_ /usr/include/openssl/
> /usr/include/openssl/ssl.h:#define SSL_MODE_ENABLE_PARTIAL_WRITE 0x0001L
> /usr/include/openssl/ssl.h:#define SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER 0x0002L
> /usr/include/openssl/ssl.h:#define SSL_MODE_AUTO_RETRY 0x0004L
> /usr/include/openssl/ssl.h:#define SSL_MODE_NO_AUTO_CHAIN 0x0008L
> /usr/include/openssl/ssl.h:#define SSL_MODE_RELEASE_BUFFERS 0x0010L
> /usr/include/openssl/ssl.h:#define SSL_MODE_SEND_FALLBACK_SCSV 0x0080L

So that's totally standard and the same as what we have on other systems.

> > > I have the 'haproxy -vv' output and hardware specs listed below. Also
> > > attaching the haproxy/nginx configs being used.
> >
> > Thank you, I'm really not seeing anything suspicious there. There's
> > something that you should definitely do if you're running on a kernel 3.9
> > or later, which is to use as many "bind" lines per frontend as you have
> > processes. That makes use of the kernel's SO_REUSEPORT mechanism to balance
> > the load across all processes much more evenly than when there's a single
> > queue. It might be possible that your load is imbalanced right now.
> >
> >
> I've just tested with a 3.13 kernel (backported from Ubuntu 14.04/trusty)
> and we see nearly the same results.

OK.

> Here is a small sample of what we've seen with a 1 MB payload.
> 
> cipher                    protocol  mode           reqs/sec (haproxy 1.5.14)  reqs/sec (haproxy 1.6.3)  % difference
> ECDHE-RSA-AES256-SHA384   TLS1.2    non-keepalive  208.92                     184.25                    -13.39%
> ECDHE-RSA-AES256-SHA384   TLS1.2    keepalive      224.76                     192.12                    -16.99%
> ECDHE-RSA-AES128-SHA256   TLS1.2    keepalive      174.91                     159.67                    -9.54%
> ADH-AES128-SHA            TLS1.1    keepalive      363.38                     336.24                    -8.07%

OK so in short, in the worst case the performance dropped from 2 Gbps
to 1.7 Gbps. That's particularly low for a multi-process config. The
typical performance you should get on AES256 and keep-alive is around
3-5 Gbps per core depending on the CPU's frequency.

Could you possibly run the same test in a single-process config ? Please
just run the ECDHE-RSA-AES256-SHA384-keepalive test since it's the most
visible one.
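
In case it helps, here is roughly what I mean by both setups (just a sketch;
the certificate path, ports and backend server are placeholders to adapt to
your config) :

  # single-process test
  global
      nbproc 1
      tune.ssl.default-dh-param 2048

  defaults
      mode http
      timeout connect 5s
      timeout client  30s
      timeout server  30s

  frontend fe_ssl
      bind :443 ssl crt /etc/haproxy/test.pem ciphers ECDHE-RSA-AES256-SHA384
      default_backend bk_nginx

  backend bk_nginx
      server local 127.0.0.1:8080

  # multi-process variant: one "bind" line per process, so that the kernel's
  # SO_REUSEPORT gives each process its own accept queue
  global
      nbproc 4

  frontend fe_ssl
      bind :443 ssl crt /etc/haproxy/test.pem process 1
      bind :443 ssl crt /etc/haproxy/test.pem process 2
      bind :443 ssl crt /etc/haproxy/test.pem process 3
      bind :443 ssl crt /etc/haproxy/test.pem process 4
      default_backend bk_nginx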

Also another test worth doing is to start a second load generator (I
don't know if you have another machine available) to make sure that
nothing in the middle is limiting the performance, including the load
generator itself. Because quite frankly, these
numbers are suspiciously low. I've reached 19 Gbps of SSL traffic
in keep-alive with 1M objects on a quad-core. I'm not saying that
you should have seen 80 Gbps, but at least you should have seen
much more than 2 Gbps...

Regards,
Willy




Re: ssl performance regression in version 1.6

2016-01-22 Thread Gary Barrueto
On Thu, Jan 21, 2016 at 11:11 AM, Willy Tarreau  wrote:

> Hi Gary,
>
> On Thu, Jan 07, 2016 at 09:48:59PM -0800, Gary Barrueto wrote:
> > I've been testing ssl with version 1.5.14 and 1.6.3. I noticed that with
> > larger files (1mb) reqs/sec is on average 7% slower and as much as 16%
> > depending on the cipher when using version 1.6.3 compared to 1.5.14.
> > Smaller requests (4k files) are not affected. Haproxy is using the exact
> > same config for each version and is using nginx on localhost to serve the
> > static files. We're getting our stats from running the wrk benchmark tool,
> > which is running on another server with the same hardware spec and is
> > connected to the same switch.
> > Any ideas what may be causing this?
>
> Unfortunately not. Do you have a way to ensure the same algorithms are
> negotiated on both versions ? I've run a diff between 1.5.14 and 1.6.3
> regarding SSL, and it's very limited. Most of the changes affect OpenSSL
> 1.0.2 (you're on 1.0.1), or automatic DH params and in your case they're
> already forced.
>

That's exactly what I'm doing now: forcing the client to only negotiate the
specific protocol/cipher. The largest difference we see is
with ECDHE-RSA-AES256-SHA384/TLS1.2+keepalive, which is 16% slower than on
1.5.14.
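
(In case it's useful to anyone else, a quick way to double-check what actually
gets negotiated is something like this, where the host name is only a
placeholder:

  openssl s_client -connect haproxy-host:443 -tls1_2 \
      -cipher ECDHE-RSA-AES256-SHA384 < /dev/null 2>/dev/null \
      | grep -E 'Protocol|Cipher'
)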



>
> There's something though, I'm seeing SSL_MODE_SMALL_BUFFERS being added
> in 1.6. It only comes with a patch and is not standard, it allows openssl
> to use less memory for small messages. Could you please run the following
> command to see what SSL_MODE_* options are defined on your system :
>
>$ grep -rF SSL_MODE_ /usr/include/openssl/
>
Here is the output from the command.

gary:~$ grep -rF SSL_MODE_ /usr/include/openssl/
/usr/include/openssl/ssl.h:#define SSL_MODE_ENABLE_PARTIAL_WRITE 0x0001L
/usr/include/openssl/ssl.h:#define SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER 0x0002L
/usr/include/openssl/ssl.h:#define SSL_MODE_AUTO_RETRY 0x0004L
/usr/include/openssl/ssl.h:#define SSL_MODE_NO_AUTO_CHAIN 0x0008L
/usr/include/openssl/ssl.h:#define SSL_MODE_RELEASE_BUFFERS 0x0010L
/usr/include/openssl/ssl.h:#define SSL_MODE_SEND_FALLBACK_SCSV 0x0080L


> > I have the 'haproxy -vv' output and hardware specs listed below. Also
> > attaching the haproxy/nginx configs being used.
>
> Thank you, I'm really not seeing anything suspicious there. There's
> something that you should definitely do if you're running on a kernel 3.9
> or later, which is to use as many "bind" lines per frontend as you have
> processes. That makes use of the kernel's SO_REUSEPORT mechanism to balance
> the load across all processes much more evenly than when there's a single
> queue. It might be possible that your load is imbalanced right now.
>
>
I've just tested with a 3.13 kernel (backported from Ubuntu 14.04/trusty)
and we see nearly the same results.

> > Other than that, version 1.6.3 seems to be performing well on smaller
> > requests. It's the larger requests we're worried about, as that's the size
> > of the majority of the traffic we want on SSL.
>
> That's what puzzles me. Usually the SSL performance issues are more visible
> on small objects than large ones because they're caused by larger keys or
> more costly protocols. Here it would imply either more buffer exchanges,
> or more expensive symmetric crypto.
>
> Just out of curiosity, what is the order of magnitude of the numbers you're
> observing ?
>
> Regards,
> willy
>
Here is a small sample of what we've seen with a 1 MB payload.

cipher                    protocol  mode           reqs/sec (haproxy 1.5.14)  reqs/sec (haproxy 1.6.3)  % difference
ECDHE-RSA-AES256-SHA384   TLS1.2    non-keepalive  208.92                     184.25                    -13.39%
ECDHE-RSA-AES256-SHA384   TLS1.2    keepalive      224.76                     192.12                    -16.99%
ECDHE-RSA-AES128-SHA256   TLS1.2    keepalive      174.91                     159.67                    -9.54%
ADH-AES128-SHA            TLS1.1    keepalive      363.38                     336.24                    -8.07%


-gary


Re: http_date converter gives wrong date

2016-01-22 Thread Gregor Kovač
Hi!

I've tried it and it works as expected. Now I get "Expires: Fri, 22 Jan
2016 17:43:38 GMT"

Best regards,
Kovi

2016-01-22 19:35 GMT+01:00 Holger Just :

> Hi,
>
> Gregor Kovač wrote:
> > The problem I have here is that Expires should be Friday and not
> Saturday.
>
> This is indeed a bug in HAProxy as it assumes the weekday to start on
> Monday instead of Sunday. The attached patch fixes this issue.
>
> The patch applies cleanly against master and 1.6.
>
>
> Regards,
> Holger
>



-- 
-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
|  In A World Without Fences Who Needs Gates?  |
|  Experience Linux.   |
-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~


Re: keep-alive problems and best practices question

2016-01-22 Thread Bryan Talbot
On Fri, Jan 22, 2016 at 3:18 AM, Piotr Rybicki  wrote:

>
> Found it. Seems like this issue:
>
> http://www.serverphorums.com/read.php?10,1341691
>
>
>> haproxy 1.5.15, linux 3.18.24


This issue was fixed in 1.5 with 3de8e7ab8 in November but there hasn't
been a release with it yet.

1.6.3 has the fix already.

Maybe it's time for 1.5.16?

-Bryan


Re: 1.6.3 stats

2016-01-22 Thread PiBa-NL

Hi,
Not sure if I interpret this right, but be careful...

On 22-1-2016 at 21:41, shouldbe q931 wrote:

Then I moved "stats enable" and "stats auth" lines to defaults, and
added "stats admin if TRUE" to each frontend and backend that I want
to be managed.

I think if you have 'stats enable' in the defaults, you effectively have
a stats page on every frontend that exists in the configuration, which
would probably be reachable under http://yourdomain.tld/haproxy?stats ,
possibly even with those admin permissions.
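
Roughly the kind of thing I mean, as a sketch only (section names and
credentials are made up, and again I did not verify this) :

  defaults
      stats enable                # inherited by every section below...
      stats auth admin:secret

  listen l_www
      bind :80
      server web1 127.0.0.1:8080  # ...so /haproxy?stats now answers here too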


My two cents... which I did not verify.
Regards,
PiBa-NL




Re: 1.6.3 stats

2016-01-22 Thread Cyril Bonté

Hi,

On 22/01/2016 at 21:41, shouldbe q931 wrote:

Hi,

Because I want to get Lua working (for letsencrypt) I wanted to move from
1.5 to 1.6 (built 1.6.3 from git)

In 1.5 I had a very simple stats config

listen  stats :7000
 stats   enable
 stats   uri /
 stats   auth user:pass
 stats   admin if TRUE

This failed under 1.6


The only thing you had to do is to move the implicit "bind" from the
"listen" line to a dedicated line (this syntax is now forbidden to
prevent issues when copy/pasting some examples, for example when working
with ssl) :


listen  stats
 bind :7000
 stats   enable
 stats   uri /
 stats   auth user:pass
 stats   admin if TRUE



Reading the docs at http://cbonte.github.io/haproxy-dconv/configuration-1.6.html

First I created a new listener

listen stats
 bind *:7000
 stats uri /

Then I moved "stats enable" and "stats auth" lines to defaults, and
added "stats admin if TRUE" to each frontend and backend that I want
to be managed.


That's not how "stats admin" works : it has to be on the statistics section.



When I browse to :7000 (only open internally) I do not get an
authentication prompt, I can see the stats, but the manage options
have gone, presumably because of the lack of authentication

I presume I'm missing something obvious, but after several hours of
re-reading, my progression has stopped :-(

Could somebody point me in the right direction?

-
local@haproxy-2:~$ haproxy -vv
HA-Proxy version 1.6.3 2015/12/27
Copyright 2000-2015 Willy Tarreau 

Build options :
   TARGET  = linux2628
   CPU = native
   CC  = gcc
   CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
-Wdeclaration-after-statement
   OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=yes USE_PCRE=1

Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.2
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
-

I also noticed a very small "bug": under External resources on the top
left of the stats page, it lists "Updates (v1.5)"; should this be
changed to 1.6 ?


Cheers




--
Cyril Bonté



1.6.3 stats

2016-01-22 Thread shouldbe q931
Hi,

Because I want to get Lua working (for letsencrypt) I wanted to move from
1.5 to 1.6 (built 1.6.3 from git)

In 1.5 I had a very simple stats config

listen  stats :7000
stats   enable
stats   uri /
stats   auth user:pass
stats   admin if TRUE

This failed under 1.6

Reading the docs at http://cbonte.github.io/haproxy-dconv/configuration-1.6.html

First I created a new listener

listen stats
bind *:7000
stats uri /

Then I moved "stats enable" and "stats auth" lines to defaults, and
added "stats admin if TRUE" to each frontend and backend that I want
to be managed.

When I browse to :7000 (only open internally) I do not get an
authentication prompt, I can see the stats, but the manage options
have gone, presumably because of the lack of authentication

I presume I'm missing something obvious, but after several hours of
re-reading, my progression has stopped :-(

Could somebody point me in the right direction?

-
local@haproxy-2:~$ haproxy -vv
HA-Proxy version 1.6.3 2015/12/27
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
-Wdeclaration-after-statement
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=yes USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.2
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
-

I also noticed a very small "bug": under External resources on the top
left of the stats page, it lists "Updates (v1.5)"; should this be
changed to 1.6 ?


Cheers



Re: [PATCH] BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week

2016-01-22 Thread Willy Tarreau
Hi Cyril,

On Fri, Jan 22, 2016 at 07:40:28PM +0100, Cyril Bonté wrote:
> Gregor Kovač reported that http_date() did not return the right day of the
> week. For example "Sat, 22 Jan 2016 17:43:38 GMT" instead of "Fri, 22 Jan
> 2016 17:43:38 GMT". Indeed, gmtime() returns a 'struct tm' result, where
> tm_wday begins on Sunday, whereas the code assumed it began on Monday.

Ah sorry for this, I must have been more stupid than average when doing this!

> This patch must be backported to haproxy 1.5 and 1.6.

I've merged it into 1.7-dev and will backport it ASAP.

Thanks!
Willy




[PATCH] BUG/MEDIUM: sample: http_date() doesn't provide the right day of the week

2016-01-22 Thread Cyril Bonté
Gregor Kovač reported that http_date() did not return the right day of the
week. For example "Sat, 22 Jan 2016 17:43:38 GMT" instead of "Fri, 22 Jan
2016 17:43:38 GMT". Indeed, gmtime() returns a 'struct tm' result, where
tm_wday begins on Sunday, whereas the code assumed it began on Monday.

This patch must be backported to haproxy 1.5 and 1.6.
---
 src/proto_http.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index e362a96..2f76afe 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -11973,7 +11973,7 @@ int val_hdr(struct arg *arg, char **err_msg)
  */
 static int sample_conv_http_date(const struct arg *args, struct sample *smp, 
void *private)
 {
-   const char day[7][4] = { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };
+   const char day[7][4] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
    const char mon[12][4] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
struct chunk *temp;
struct tm *tm;
-- 
2.7.0




Re: http_date converter gives wrong date

2016-01-22 Thread Holger Just
Hi,

Gregor Kovač wrote:
> The problem I have here is that Expires should be Friday and not Saturday.

This is indeed a bug in HAProxy as it assumes the weekday to start on
Monday instead of Sunday. The attached patch fixes this issue.

The patch applies cleanly against master and 1.6.


Regards,
Holger
From 32cf0c931f0c4bfd3ea687aa7399e4f95626b6ad Mon Sep 17 00:00:00 2001
From: Holger Just 
Date: Fri, 22 Jan 2016 19:23:43 +0100
Subject: [PATCH] BUG/MINOR: Correct weekdays in http_date converter

Days of the week as returned by gmtime(3) are defined as the number of
days since Sunday, in the range 0 to 6.
---
 src/proto_http.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index e362a96..2f76afe 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -11973,7 +11973,7 @@ int val_hdr(struct arg *arg, char **err_msg)
  */
 static int sample_conv_http_date(const struct arg *args, struct sample *smp, 
void *private)
 {
-   const char day[7][4] = { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };
+   const char day[7][4] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
    const char mon[12][4] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
struct chunk *temp;
struct tm *tm;
-- 
2.6.4



Re: http_date converter gives wrong date

2016-01-22 Thread Cyril Bonté

Hi Gregor,

On 22/01/2016 at 18:01, Gregor Kovač wrote:

Hi!

I've been using HAProxy 1.6.3 on Xubuntu 14.04 64-bit.
In my haproxy.conf I have:
(...)
 http-response set-header Expires %[date,http_date]
(...)
I test this using curl:
curl --cookie "USER_TOKEN=2345678901;BLA=ena;BILUMINA_SERVERID=3" --user
gregor:kovi http://localhost:10001

The HTTP response I get back:
HTTP/1.1 200 OK
content-length: 40
content-type: text/plain
server_name: curl/7.35.0
Expires: Sat, 22 Jan 2016 17:43:38 GMT

Hello HTTP World!

The problem I have here is that Expires should be Friday and not Saturday.


Confirmed.
The bug is easy to fix : gmtime() returns the day of the week as an int 
from 0 to 6, where Sunday = 0, whereas the code in haproxy begins on 
Monday :

const char day[7][4] = { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };

I'll provide a patch soon.
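
For the record, a tiny standalone check of the gmtime() numbering, only a
sketch (it relies on the glibc timegm() extension) :

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      /* 22 Jan 2016, the date from the report */
      struct tm in = { .tm_year = 2016 - 1900, .tm_mon = 0, .tm_mday = 22 };
      time_t t = timegm(&in);
      const struct tm *out = gmtime(&t);
      /* weekdays are numbered from Sunday = 0, so the table must start on "Sun" */
      const char day[7][4] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };
      printf("tm_wday=%d -> %s\n", out->tm_wday, day[out->tm_wday]); /* 5 -> Fri */
      return 0;
  }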


--
Cyril Bonté



Re: Reloading haproxy without dropping connections

2016-01-22 Thread David Martin
We use the iptables SYN drop method, and it works fine; the additional 1 sec
in response time for the tiny number of new connections doesn't bother
us as we are not restarting multiple times per hour.
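
For completeness, the fence amounts to roughly this (a sketch only; the port
and the reload command are placeholders for whatever you use) :

  # hold new connections while the reload happens; the dropped SYNs are
  # retransmitted by the client about one second later, so nothing is lost
  iptables -I INPUT -p tcp --dport 443 --syn -j DROP
  sleep 0.5
  service haproxy reload
  iptables -D INPUT -p tcp --dport 443 --syn -j DROP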

On Fri, Jan 22, 2016 at 11:01 AM, CJ Ess  wrote:
> The yelp solution I can't do because it requires a newer kernel than I have
> access to, but the unbounce solution is interesting, I may be able to work
> up something around that.
>
>
>
> On Fri, Jan 22, 2016 at 4:07 AM, Pedro Mata-Mouros
>  wrote:
>>
>> Hi,
>>
>> Haven’t had the chance to implement this yet, but maybe these links can
>> get you started:
>>
>>
>> http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
>> http://inside.unbounce.com/product-dev/haproxy-reloads/
>>
>> It’d be cool to have a sort of “officially endorsed” way of achieving
>> this.
>>
>> Best,
>>
>> Pedro.
>>
>>
>>
>> On 22 Jan 2016, at 00:38, CJ Ess  wrote:
>>
>> One of our sore points with HAProxy has been that when we do a reload
>> there is a ~100ms gap where neither the old or new HAproxy processes accept
>> any requests. See attached graphs. I assume that during this time any
>> connections received to the port are dropped. Is there anything we can do so
>> that the old process keeps accepting requests until the new process is
>> completely initialized and starts accepting connections on its own?
>>
>> I've looked into fencing the restart with iptable commands to blackhole
>> TCP SYNs, and I've looked into the huptime utility though I'm not sure
>> overloading libc functions is the best approach long term. Any other
>> solutions?
>>
>>
>> 
>> 
>>
>>
>>
>



Re: Reloading haproxy without dropping connections

2016-01-22 Thread CJ Ess
The yelp solution I can't do because it requires a newer kernel than I have
access to, but the unbounce solution is interesting, I may be able to work
up something around that.



On Fri, Jan 22, 2016 at 4:07 AM, Pedro Mata-Mouros  wrote:

> Hi,
>
> Haven’t had the chance to implement this yet, but maybe these links can
> get you started:
>
>
> http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
> http://inside.unbounce.com/product-dev/haproxy-reloads/
>
> It’d be cool to have a sort of “officially endorsed” way of achieving this.
>
> Best,
>
> Pedro.
>
>
>
> On 22 Jan 2016, at 00:38, CJ Ess  wrote:
>
> One of our sore points with HAProxy has been that when we do a reload
> there is a ~100ms gap where neither the old or new HAproxy processes accept
> any requests. See attached graphs. I assume that during this time any
> connections received to the port are dropped. Is there anything we can do
> so that the old process keeps accepting requests until the new process is
> completely initialized and starts accepting connections on its own?
>
> I've looked into fencing the restart with iptable commands to blackhole
> TCP SYNs, and I've looked into the huptime utility though I'm not sure
> overloading libc functions is the best approach long term. Any other
> solutions?
>
>
> 
> 
>
>
>
>


http_date converter gives wrong date

2016-01-22 Thread Gregor Kovač
Hi!

I've been using HAProxy 1.6.3 on Xubuntu 14.04 64-bit.
In my haproxy.conf I have:
listen proxy_http
    bind 127.0.0.1:10001
    mode http
    option httplog
    log-tag haproxy-proxy_http_tag
    acl auth_ok http_auth(L1)
    acl local_net src 192.168.0.0/24
    http-request auth realm 'proxy_http realm' if !auth_ok
    http-request allow if local_net auth_ok
    http-request deny if !local_net !auth_ok
    http-request use-service lua.hello-world-http
    http-response set-header Expires %[date,http_date]
    capture cookie USER_TOKEN len 20
    capture request header User-Agent len 64

lua.hello-world-http is defined in a Lua file like:
core.register_service("hello-world-http", "http", function(applet)
    local response = "Hello HTTP World!\n"
    applet:set_status(200)
    applet:add_header("content-length", string.len(response))
    applet:add_header("content-type", "text/plain")
    applet:add_header("server_name", applet.headers["user-agent"][0])
    applet:start_response()
    applet:send(response)
end)

I test this using curl:
curl --cookie "USER_TOKEN=2345678901;BLA=ena;BILUMINA_SERVERID=3" --user
gregor:kovi http://localhost:10001

The HTTP response I get back:
HTTP/1.1 200 OK
content-length: 40
content-type: text/plain
server_name: curl/7.35.0
Expires: Sat, 22 Jan 2016 17:43:38 GMT

Hello HTTP World!

The problem I have here is that Expires should be Friday and not Saturday.

Best regards,
Kovi

-- 
-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
|  In A World Without Fences Who Needs Gates?  |
|  Experience Linux.   |
-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~


Help you save much cost(Horisung LED Panel Light)

2016-01-22 Thread Daniel
Hello,
Good day.
This is  Daniel from Horisung Lighting.
Glad to learn that you have been offering LED lights to your customers.

As a professional manufacturer of LED lighting products, we can do something for you. 

We are able to produce the panel lights at various wattages, coming in different sizes, at a competitive price

to meet your specific requirements and differentiate you from your rivals (as shown in the following picture).
To enable you to learn more about our products, may I send you our price list and samples for evaluation?

_


 Best regards

 Daniel

Horisung Lighting Technology Co., Ltd.

Add.: NO.21,HaiZhou Road, GuZhen Town, ZhongShan City, China

+86 184 760 101 12

www.horisung.com


 



 


HAProxy Discussion

2016-01-22 Thread Ravi Kiran Aita
Hi,

Please include me in the discussion mailing list.


Regards,
Ravikiran Aita | Principal Software Engineer
EiQ Networks, Inc.
o: 91.402.311.6680 | p: 91.9885820680
"This email is intended only for the use of the individual or entity named 
above and may contain information that is confidential and privileged. If you 
are not the intended recipient, you are hereby notified that any dissemination, 
distribution or copying of the email is strictly prohibited. If you have 
received this email in error, please destroy the original message."



HAP, Modsecurity and SSL

2016-01-22 Thread Phil Daws
Hello: 

Are any of you running an architecture like 
http://blog.haproxy.com/2012/10/12/scalable-waf-protection-with-haproxy-and-apache-with-modsecurity/
 but with SSL termination in the mix ? Would be interested to hear how you have 
done it please. 

Thanks, Phil 


Re: keep-alive problems and best practices question

2016-01-22 Thread Piotr Rybicki



On 2016-01-21 at 21:10, Willy Tarreau wrote:

Hi Piotr,

On Fri, Jan 15, 2016 at 02:04:30PM +0100, Piotr Rybicki wrote:

Hi Guys!

I've recently discovered odd behaviour regarding keep-alives in haproxy
and sites with HTML5/js/ajax ultra-fancy stuff. Some (random) requests
are not loaded (like images). Disabling keep-alive in haproxy solves
this issue. 'Normal' sites seems to be working fine with keepalives on.

Is there anyone, experiencing the same issue?


I'm not aware of any such report. Does it happen a lot or just once in
a while ?


Only when the site uses JS/HTML5 stuff. The issue is reproducible.




I believe the timeouts should be the same, although a timed-out keep-alive
connection should be reconnected transparently.


The client automatically retries a failed request over a keep-alive
connection, that's mandated by the spec since nobody can predict when
the middle will close, and that due to network latency, both the client
and the server may decide to close at the same time and each of them
will receive the other one's notification after their action.


I recall someone probably reported a similar issue, but I can't find it in
the archives.


I don't have this in mind. We had many issues during 1.5-dev and these
versions were used a lot in production due to SSL, so maybe it was one of
them.


Found it. Seems like this issue:

http://www.serverphorums.com/read.php?10,1341691




haproxy 1.5.15, linux 3.18.24


Have you checked if this happens more (or only) with a specific browser ?
Have you tried to increase the keep-alive timeout to insane values just
for a test (ie: at least the test session's duration) ? Also, could you
check if you're seeing it more in HTTP or HTTPS ?


Tried Firefox and IE - the same. HTTP/HTTPS doesn't make any difference.

I have a theory that the JS code actually makes these problematic
requests/responses (not the 'plain' browser itself).


Disabling keep-alive solves this problem.

Best regards
Piotr Rybicki



Re: Set State to DRAIN vs set weight 0

2016-01-22 Thread Willy Tarreau
Hi Alex,

On Fri, Jan 22, 2016 at 11:32:14AM +0200, Alex wrote:
> Hi,
> 
> Thank you for the answer, this is very helpful.
> So to sum up my understanding, usually drain is used in operations by
> setting the server in this state for a specific amount of time and then put
> in maintenance state. So:
> 
> 1. If I have an automated operation process which sets a server state to
> drain, which means the server will process only current connections and new
> sticky connections if any, I should not have any haproxy reload until the
> state is changed to maintenance since the drain state will be lost even if
> using server-state-file feature.
> 
> 2. However, the automated operation process can use set weight 0 which
> means the server will process only current connections and new sticky
> connections if any (the same as point 1), then, after a specific time put
> it in maintenance state and after that set weight back to 100% and state to
> ready. In this case I can have a haproxy reload while "draining" with set
> weight 0 if using server-state-file feature.
> 
> Is this correct ?

Well, normally the drain state *is* saved to the file, but there is a
caveat there which I don't exactly remember. Upon reload it is deduced
from the fact that the weight was zero in the state file and not zero
in the config I believe. There's also something else which is that the
drain state cannot be set from the config and only from the CLI or agent,
so we must be careful not to create a "sticky" state by which a server
cannot be enabled anymore. In fact, the problem with server-state is that
if we save too much information we prevent the configuration changes
from being considered, and if we save too little, we lose states. So
we have to strike a balance between what is found in the state file and what
appears in the config to try to guess what was changed on purpose.

I remember we had a discussion with Baptiste which ended like "there may
be reports for specific use cases we might have to adapt for".

So in short, if the drain state is not properly saved for you, maybe we
have to study how to better save it without affecting the ability to
modify the config and reload (since reloads are done for config changes).
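
For reference, the two approaches look like this on the CLI, and "show
servers state" shows what would land in the state file (the socket path and
the backend/server names are only examples) :

  echo "set server bk_app/srv1 state drain" | socat stdio /var/run/haproxy.sock
  echo "set weight bk_app/srv1 0"           | socat stdio /var/run/haproxy.sock
  echo "show servers state bk_app"          | socat stdio /var/run/haproxy.sock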

Willy




Re: Set State to DRAIN vs set weight 0

2016-01-22 Thread Alex
Hi,

Thank you for the answer, this is very helpful.
So to sum up my understanding, usually drain is used in operations by
setting the server in this state for a specific amount of time and then put
in maintenance state. So:

1. If I have an automated operation process which sets a server state to
drain, which means the server will process only current connections and new
sticky connections if any, I should not have any haproxy reload until the
state is changed to maintenance since the drain state will be lost even if
using server-state-file feature.

2. However, the automated operation process can use set weight 0 which
means the server will process only current connections and new sticky
connections if any (the same as point 1), then, after a specific time put
it in maintenance state and after that set weight back to 100% and state to
ready. In this case I can have a haproxy reload while "draining" with set
weight 0 if using server-state-file feature.

Is this correct ?

Thank you,
Alex




On Thu, Jan 21, 2016 at 9:00 PM, Willy Tarreau  wrote:

> Hi,
>
> On Wed, Jan 20, 2016 at 10:53:46AM +0200, Alex wrote:
> > Hello,
> >
> > I've found another difference - regarding "seamless server states",
> > according to my testing using version 1.6.3 administrative state DRAIN is
> > not preserved after a reload but set weight 0 is preserved.
> > For my use case, using DRAIN seems the logical choice but because of the
> 2
> > issues that I have - state DRAIN is not preserved after a reload (this
> > issue has a higher weight :) ) and state DRAIN is not highlighted blue in
> > the stats page, I need to use set weight 0.
> > Do you think these can be considered bugs and have a chance to be solved
> ?
>
> No these are not bugs, they were done on purpose based on user demand.
> The reason is double :
>   - a number of users want to be able to temporarily put a server in
> drain mode without having to change its weight. One of the reasons
> is that you may have some automatic tools adjusting the weight once
> in a while using the CLI based on the overall load distribution. For
> example think about a cache farm where you would adjust weights to
> balance the disk I/O usage between the cache nodes. The second reason
> was the agent, where it's convenient to have an agent report an
> administrative state (up/down/drain) without touching the weights at
> all that are managed a different way.
>
>   - the second reason is that for people doing live operations, it was
> problematic to see the same color on the stats page for a server which
> uses a weight of zero based on its load or whatever agent, and a server
> being stopping that requires specific care.
>
> So in the end the DRAIN status was added to match exactly these
> expectations.
>
> In short, if you consider that your server is overloaded and should take
> less load, you should lower its weight, even if that goes down to zero (it
> will then not receive any traffic). But if you want your server to be
> evicted
> from load balancing to stop accepting new users but still process existing
> users, what you want is a drain mode. That will not modify your servers'
> weight and once you leave the drain mode, you'll get back up to speed with
> the same weight as the one you were running on.
>
> When you think from a tools perspective, you'll figure that :
>   - weight is controlled by performance monitoring tools ;
>   - drain is controlled by operations processes
>
> Hoping this helps,
> Willy
>
>


-- 
Alex S


Re: Reloading haproxy without dropping connections

2016-01-22 Thread Pedro Mata-Mouros
Hi,

Haven’t had the chance to implement this yet, but maybe these links can get you 
started:

http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
http://inside.unbounce.com/product-dev/haproxy-reloads/

It’d be cool to have a sort of “officially endorsed” way of achieving this.

Best,

Pedro.



> On 22 Jan 2016, at 00:38, CJ Ess  wrote:
> 
> One of our sore points with HAProxy has been that when we do a reload there 
> is a ~100ms gap where neither the old or new HAproxy processes accept any 
> requests. See attached graphs. I assume that during this time any connections 
> received to the port are dropped. Is there anything we can do so that the old 
> process keeps accepting requests until the new process is completely 
> initialized and starts accepting connections on its own?
> 
> I've looked into fencing the restart with iptable commands to blackhole TCP 
> SYNs, and I've looked into the huptime utility though I'm not sure 
> overloading libc functions is the best approach long term. Any other 
> solutions? 
> 
> 
> 
> 
> 
>