Re: Old style OCSP not working anymore?

2023-07-21 Thread Sander Klein

On 2023-07-21 11:51, Jarno Huuskonen wrote:

If I change the order of ipv4 / ipv6 binds (so bind ipv6@:::443 name
v6ssl... is first) then haproxy(2.8.1) sends ocsp with the ipv6 connection and not with ipv4.


Hmmm, I cannot reproduce this, but that might be because I have multiple 
frontends with multiple bind statements. It would be funny if only the 
first bind statement of the first frontend worked.
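
For context, the kind of setup under discussion looks roughly like this (a sketch with example addresses and paths, not my actual config); with the "old style" stapling the response simply lives next to the certificate as cert.pem.ocsp:

```
frontend fe_https
    # whichever of these two bind lines comes first is the one Jarno
    # reports as stapling correctly on 2.8.1
    bind 192.0.2.10:443     name v4ssl ssl crt /etc/haproxy/ssl/cert.pem
    bind [2001:db8::10]:443 name v6ssl ssl crt /etc/haproxy/ssl/cert.pem
    mode http
    default_backend be_app
```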


Sander



Re: Old style OCSP not working anymore?

2023-07-20 Thread Sander Klein

On 2023-07-20 11:14, William Lallemand wrote:

On Thu, Jul 20, 2023 at 10:23:21AM +0200, Sander Klein wrote:

On 2023-07-19 11:00, William Lallemand wrote:
"show ssl ocsp-resonse" gives me a lot of output like:

Certificate ID key : *LONGID*
Certificate path : /path/to/cert.pem
  Certificate ID:
Issuer Name Hash: *HASH*
Issuer Key Hash: *ANOTHERHASH*
Serial Number: *SERIAL*



You should check with the path argument so it gives you the date and
status.
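
For example, over the stats socket (the socket path below is just a placeholder):

```
echo "show ssl ocsp-response /path/to/cert.pem" | socat stdio /var/run/haproxy.stat
```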


Okay, so, on HAProxy 2.8.1 with the path argument I get a correct 
response:


OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: C = US, O = Let's Encrypt, CN = R3
Produced At: Jul 18 07:22:00 2023 GMT
Responses:
Certificate ID:
  Hash Algorithm: sha1
  Issuer Name Hash: 48DAC9A0FB2BD32D4FF0DE68D2F567B735F9B3C4
  Issuer Key Hash: 142EB317B75856CBAE500940E61FAF9D8B14C2C6
  Serial Number: 0323CDB93D804581B31A8D0CB737AD57728D
Cert Status: good
This Update: Jul 18 07:00:00 2023 GMT
Next Update: Jul 25 06:59:58 2023 GMT

Signature Algorithm: sha256WithRSAEncryption
 37:d6:5a:2a:f8:b6:36:a7:5b:b8:1a:7b:24:39:a4:33:61:b7:
 68:85:50:bf:5f:cd:e7:17:1b:9b:cb:c5:fa:31:60:ad:96:71:
 f3:39:aa:09:f1:d2:5f:fa:d1:29:a6:3e:27:75:b7:f4:68:7b:
 83:d1:00:7d:e5:52:63:52:56:0f:a3:9c:1c:49:92:1b:a9:6a:
 f5:3d:0a:e0:73:8d:ed:89:4b:19:b9:ad:17:7d:ca:f3:bc:3e:
 6d:5f:7c:37:95:f2:50:2f:a2:ed:14:e4:eb:15:dd:7b:eb:93:
 0e:17:62:cb:14:6b:1c:41:6a:07:ba:9b:58:33:c0:5b:5d:32:
 c3:f6:ad:c7:a7:42:b7:a2:6e:f0:fd:8c:94:d0:e4:87:bf:fa:
 9c:79:19:fd:54:d8:40:2a:71:6d:9b:f4:1f:42:78:fa:d1:5c:
 ac:66:46:c6:2e:59:a3:b1:f1:42:3b:e8:91:6a:85:1d:eb:7d:
 12:da:0f:35:8f:99:50:13:fa:91:08:25:a9:83:f0:c2:a9:d3:
 71:f2:85:5f:3e:65:0e:93:ab:d0:39:89:49:b7:02:01:56:de:
 e9:2d:4c:17:e4:58:a2:ea:b0:d0:66:74:a5:ac:91:2e:4f:e0:
 1f:bf:f8:b9:ac:99:32:17:94:9a:0a:ac:e6:78:d9:73:9a:01:
 f2:1d:75:82


Jul 20 10:14:30 some.hostname.nl haproxy[452783]: x.x.x.x:54404
[20/Jul/2023:10:14:30.375] cluster1-in/3: SSL handshake failure
(error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad
certificate)



This message could mean a lot of things: a wrongly generated certificate,
unsupported signature algorithms, an incorrect chain...


They are plain Let's Encrypt certificates created with acme.sh, with OCSP must-staple enabled. Moreover, they work on 2.6.14.
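
(For completeness, this is roughly how such a certificate gets issued — a sketch with an example domain; I believe the acme.sh flag is spelled like this:)

```
acme.sh --issue -d www.example.com -w /var/www/html --ocsp-must-staple
```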



Downgrading to 2.6.14 fixes it again.


I don't see why it would change like this; did you change the OpenSSL
version linked to haproxy? Recent distributions have restricted some old
algorithms and that could be a problem. We didn't change much in the
loading between 2.6 and 2.8, so I'm not seeing why the behavior changed.


The packages I use are the Debian 11 packages from Vincent Bernat. 
Looking at the ldd output, nothing has changed. Also no libraries are 
changed/upgraded when HAProxy is upgraded.



The best thing to do is to test with `openssl s_client -showcerts
-connect some.hostname.nl:443` with both your versions to identify what
changed.


I've tested with 'openssl s_client -showcerts -connect mydomain.com:443 -servername mydomain.com -status -tlsextdebug'.


On 2.6.14 I get an OCSP response, on 2.8.1 I get:

"OCSP response: no response sent"

It really looks like HAProxy doesn't want to send the response coming 
from the file. Is there any more information I can gather?
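
For what it's worth, I can also inspect the on-disk response and push it over the stats socket to see whether stapling then starts working (paths below are just examples):

```
# Inspect the .ocsp file that sits next to the certificate
openssl ocsp -respin /etc/haproxy/ssl/cert.pem.ocsp -text -noverify

# Push the same response into the running haproxy
echo "set ssl ocsp-response $(base64 -w 0 /etc/haproxy/ssl/cert.pem.ocsp)" | \
    socat stdio /var/run/haproxy.stat
```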



Regards,

Sander




Re: Old style OCSP not working anymore?

2023-07-20 Thread Sander Klein

On 2023-07-19 11:00, William Lallemand wrote:

On Mon, Jul 17, 2023 at 08:12:59PM +0200, Sander Klein wrote:

On 2023-07-17 15:17, William Lallemand wrote:
> On Thu, Jul 13, 2023 at 05:01:06PM +0200, Sander Klein wrote:
>> Hi,
>>
>> I tried upgrading from 2.6.14 to 2.8.1, but after the upgrade I
>> couldn't
>> connect to any of the sites behind it.
>>
>> While looking at the error it seems like OCSP is not working anymore.
>> Right now I have a setup in which I provision the certificates with
>> the
>> corresponding ocsp file next to it. Is this not supported anymore?
>
> This is supposed to still be working, however we could have introduced
> bugs when building the ocsp-update. Are you seeing errors during the
> OCSP file loading?

I don't see any errors, not even when I start haproxy by hand with 
'-d'.

It's just like the ocsp isn't used at all. Also started haproxy with
strace attached and I see the ocsp files are loaded.

Regards,

Sander



Did you check with "show ssl ocsp-response" ?

http://docs.haproxy.org/2.8/management.html#show%20ssl%20ocsp-response


"show ssl ocsp-resonse" gives me a lot of output like:

Certificate ID key : *LONGID*
Certificate path : /path/to/cert.pem
 Certificate ID:
   Issuer Name Hash: *HASH*
   Issuer Key Hash: *ANOTHERHASH*
   Serial Number: *SERIAL*

So I guess that's correct. But when I do a request for a site I get:

Jul 20 10:14:30 some.hostname.nl haproxy[452783]: x.x.x.x:54404 
[20/Jul/2023:10:14:30.375] cluster1-in/3: SSL handshake failure 
(error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad 
certificate)


Downgrading to 2.6.14 fixes it again.

Sander



Re: Old style OCSP not working anymore?

2023-07-17 Thread Sander Klein

On 2023-07-17 15:17, William Lallemand wrote:

On Thu, Jul 13, 2023 at 05:01:06PM +0200, Sander Klein wrote:

Hi,

I tried upgrading from 2.6.14 to 2.8.1, but after the upgrade I couldn't connect to any of the sites behind it.

While looking at the error it seems like OCSP is not working anymore.
Right now I have a setup in which I provision the certificates with the corresponding ocsp file next to it. Is this not supported anymore?


This is supposed to still be working, however we could have introduced
bugs when building the ocsp-update. Are you seeing errors during the
OCSP file loading?


I don't see any errors, not even when I start haproxy by hand with '-d'. 
It's just like the ocsp isn't used at all. Also started haproxy with 
strace attached and I see the ocsp files are loaded.


Regards,

Sander



Re: Old style OCSP not working anymore?

2023-07-14 Thread Sander Klein

Hi,

On 2023-07-14 01:56, Shawn Heisey wrote:

On 7/13/23 09:01, Sander Klein wrote:
I tried upgrading from 2.6.14 to 2.8.1, but after the upgrade I 
couldn't connect to any of the sites behind it.


While looking at the error it seems like OCSP is not working anymore. 
Right now I have a setup in which I provision the certificates with 
the corresponding ocsp file next to it. Is this not supported anymore?


Does your certificate have "must-staple" configured?  That is the only
way I can imagine an OCSP problem would keep websites from working.  I
do ocsp stapling with haproxy, but I don't use "must-staple".  I do
not believe that ocsp stapling is supported widely enough yet to
declare that it MUST happen.


Yes I do have must-staple enabled, but I also update regularly and 
restart HAProxy. The thing is, with HAProxy 2.8.1 it doesn't work at all 
anymore. Not even with fresh ocsp files and a fresh restart.



I uploaded a script to github.  This is the script I used before
haproxy gained the ability to do its own OCSP updates.  The script
updates the .ocsp file(s) and informs haproxy about the new
response(s) so haproxy does not need to be restarted:

https://github.com/elyograg/haproxy-ocsp-elyograg

The script relies on mktemp, openssl, socat, and base64.
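
In essence such a script fetches a fresh response from the CA and feeds it to the running HAProxy over the stats socket, so no restart is needed. A minimal sketch (example paths, not the actual script):

```
#!/bin/sh
CERT=/etc/haproxy/ssl/site.pem      # served certificate
ISSUER=/etc/haproxy/ssl/chain.pem   # issuing CA certificate
OCSP_URL=$(openssl x509 -in "$CERT" -noout -ocsp_uri)

# fetch a fresh OCSP response next to the certificate
openssl ocsp -issuer "$ISSUER" -cert "$CERT" -url "$OCSP_URL" \
    -no_nonce -respout "${CERT}.ocsp"

# hand it to the running haproxy over the stats socket
echo "set ssl ocsp-response $(base64 -w 0 "${CERT}.ocsp")" | \
    socat stdio /var/run/haproxy.stat
```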


I can have a look at this, but I think I have about the same setup right 
now.



Sander



Old style OCSP not working anymore?

2023-07-13 Thread Sander Klein

Hi,

I tried upgrading from 2.6.14 to 2.8.1, but after the upgrade I couldn't 
connect to any of the sites behind it.


While looking at the error it seems like OCSP is not working anymore. 
Right now I have a setup in which I provision the certificates with the 
corresponding ocsp file next to it. Is this not supported anymore?


Regards,

Sander




Re: SPOE

2023-06-15 Thread Sander Klein

On 2023-06-15 22:11, Sander Klein wrote:

Hi,

Is there a way to filter which URL's go through SPOE and which are
just handled directly in a single frontend?

I can't seem to find it in the documentation. I'm currently on HAProxy 
2.6.14.


Right after I mailed this I read SPOE.txt a bit better and saw that you can 
influence it with, for instance, on-http-request events and HAProxy ACLs.


So, I think I can figure it out ;-)
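
For the archives, the direction I mean is roughly this (a sketch; the engine name, paths and the ACL are just examples) — the filter is enabled in the frontend and the SPOE message only fires when its condition matches:

```
# haproxy.cfg
frontend fe_main
    filter spoe engine my-spoe config /etc/haproxy/spoe.conf
    default_backend be_app

backend be_spoe_agents
    mode tcp
    server agent1 127.0.0.1:12345

# /etc/haproxy/spoe.conf
[my-spoe]
spoe-agent my-agent
    messages    check-request
    use-backend be_spoe_agents
    timeout hello      100ms
    timeout idle       30s
    timeout processing 15ms

spoe-message check-request
    args url=url src=src
    event on-frontend-http-request if { path_beg /api }
```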

Greets,

Sander



SPOE

2023-06-15 Thread Sander Klein

Hi,

Is there a way to filter which URL's go through SPOE and which are just 
handled directly in a single frontend?


I can't seem to find it in the documentation. I'm currently on HAProxy 
2.6.14.


Regards,

Sander Klein



Issue with uploads and HAProxy 2.4.11

2022-01-10 Thread Sander Klein

Hi,

I've upgraded to HAProxy 2.4.11 and now I seem to have a problem with 
bigger file uploads (>70MB).


When uploading a file I get a 500 back from HAProxy, and if I retry it immediately it succeeds most of the time. Downgrading to 2.4.10 fixes the issue. The log I get is:


Jan 10 12:09:45 [redacted] haproxy[21823]: 2001:67c:[redacted] 
[10/Jan/2022:12:09:20.543] [redacted]~ [redacted]/[redacted] 
11198/0/0/-1/25137 500 1991 - - IH-- 957/282/0/0/0 0/0 
{[redacted].[redacted].com|Mozilla/5.0 
(Mac|80349066|https://[redacted].[redacted].com/upload} {} “POST 
https://[redacted].[redacted].com/upload/process?projectId=3431=149 
HTTP/2.0”


The frontend is HTTP/2.0 and the backend is NGINX talking HTTP/1.1 
(non-TLS).


The config is quite large, but I think it boils down to:

---
frontend [redacted]
bind [redacted]:80 transparent
bind 2001:67c:[redacted]:80 transparent

	bind [redacted]:443 transparent ssl crt /etc/haproxy/ssl/[redacted] 
strict-sni alpn h2,http/1.1 npn h2,http/1.1
	bind 2001:67c:[redacted]:443 transparent ssl crt 
/etc/haproxy/ssl/[redacted] strict-sni alpn h2,http/1.1 npn h2,http/1.1


mode http
maxconn 16384

option httplog
option dontlog-normal
option http-ignore-probes
option forwardfor
option http-buffer-request

capture request header Host len 64
capture request header User-Agent   len 16
capture request header Content-Length   len 10
capture request header Referer  len 256
capture response header Content-Length  len 10


acl [some ACLs here]
acl [some ACLs here]

http-request deny if [an ACL]
http-request deny if [another ACL]

use_backend [failing-backend]   if [ACL]
use_backend 
%[req.hdr(host),lower,regsub(^www\.,,i),map(/etc/haproxy/map.d/file.map,yes-backend)]
default_backend another-backend

backend failing-backend
fullconn256
modehttp

balance roundrobin

option abortonclose
option prefer-last-server
option redispatch
option httpchk GET /check-thingy HTTP/1.0
http-check expect status 200

	default-server weight 100 maxconn 20 check inter 2s rise 3 fall 3 
slowstart 5m agent-check agent-port 8081 agent-inter 20s

server server1 [redacted]:80 cookie cookie1
server server2 [redacted]:80 cookie cookie2

# Sorry Server
server outage 127.0.0.1:80 backup

retries 1
---

If any more info is needed, please let me know.

Regards,

Sander Klein



Re: Table sticky counters decrementation problem

2021-03-30 Thread Sander Klein

On 2021-03-30 19:15, Willy Tarreau wrote:

On Tue, Mar 30, 2021 at 07:07:41PM +0200, Sander Klein wrote:

On 2021-03-30 18:14, Willy Tarreau wrote:

> No, my chance is already gone :-)
>
> OK, I'm pushing this one into 2.3, re-running the tests a last time,
> and issuing 2.3.9. We'll be able to issue 2.2.12 soon finally, as users
> of 2.2 are still into trouble between 2.2.9 and 2.2.11 depending on the
> bug they try to avoid :-/

Somehow either my patching skillz have gone down the drain or this fix
doesn't work for me on 2.2.11. I still see the same behavior.


No worries, I'll backport whatever is needed so that you can test the
latest maintenance version, it will make you more confident in your
tests.


Yes! It works. Sometimes you just need to go home, eat something and 
look again. It did need a full restart to get it going again though.


Sander



Re: Table sticky counters decrementation problem

2021-03-30 Thread Sander Klein

On 2021-03-30 18:14, Willy Tarreau wrote:


No, my chance is already gone :-)

OK, I'm pushing this one into 2.3, re-running the tests a last time,
and issuing 2.3.9. We'll be able to issue 2.2.12 soon finally, as users
of 2.2 are still into trouble between 2.2.9 and 2.2.11 depending on the
bug they try to avoid :-/


Somehow either my patching skillz have gone down the drain or this fix 
doesn't work for me on 2.2.11. I still see the same behavior.


Sander



Re: Table sticky counters decrementation problem

2021-03-30 Thread Sander Klein

On 2021-03-30 15:13, Willy Tarreau wrote:


diff --git a/src/time.c b/src/time.c
index 0cfc9bf3c..fafe3720e 100644
--- a/src/time.c
+++ b/src/time.c
@@ -268,7 +268,7 @@ void tv_update_date(int max_wait, int interrupted)
old_now_ms = global_now_ms;
do {
new_now_ms = old_now_ms;
-   if (tick_is_lt(new_now_ms, now_ms))
+   if (tick_is_lt(new_now_ms, now_ms) || !new_now_ms)
new_now_ms = now_ms;
	} while (!_HA_ATOMIC_CAS(&global_now_ms, &old_now_ms, new_now_ms));


Do I need to apply this on top of the other fixes? Or should this be 
done on the vanilla 2.2.11?


Sander



Re: Table sticky counters decrementation problem

2021-03-30 Thread Sander Klein

On 2021-03-30 10:17, Lukas Tribus wrote:

Hello Thomas,


this is a known issue in any release train other than 2.3 ...

https://github.com/haproxy/haproxy/issues/1196

However neither 2.3.7 (does not contain the offending commits), nor
2.3.8 (contains all the fixes) should be affected by this.


Are you absolutely positive that you are running 2.3.8 and not
something like 2.2 or 2.0 ? Can you provide the full output of haproxy
-vv?



I can confirm I'm seeing this on 2.3.8 as well. Moreover, I also see it 
happening on 2.2.11 with Willy's patches applied.


I am very confused, because I am pretty sure this problem was gone last 
week when I tested the patches and took that version into production.


Sander





Re: Stick table counter not working after upgrade to 2.2.11

2021-03-23 Thread Sander Klein

On 2021-03-23 09:32, Willy Tarreau wrote:

Guys,

These two patches address it for me, and I could verify that they apply
on top of 2.2.11 and work there as well. This time I tested with two
counters at different periods 500 and 2000ms.


I've just applied your patches and tested. It seems to work now. Thanks.

Sander



Stick table counter not working after upgrade to 2.2.11

2021-03-22 Thread Sander Klein

Hi,

I have upgraded to haproxy 2.2.11 today and it seems like my stick table counter is not working anymore. It only increases on every hit and never decreases. Downgrading back to 2.2.10 fixes this issue.


The setup is a replicated stick table like:

```
table apikey type ipv6 size 1m expire 24h store http_req_rate(2s)
```

And in my frontend I use:

```
acl has_apiKey url_param(apiKey) -m found
acl is_apiabuser src_http_req_rate(lb1/apikey) gt 10
acl is_rejectapi src_http_req_rate(lb1/apikey) gt 20

http-request track-sc0 src table lb1/apikey if has_apiKey 
!in_picturae_ip

http-request deny deny_status 429 if is_rejectapi
http-request lua.delay_request if is_apiabuser
```

Is this a known issue? I didn't find anything on GitHub.

Regards,

Sander



Re: Haproxy 2.2.0 segfault

2020-07-24 Thread Sander Klein

On 2020-07-20 21:41, Sander Klein wrote:

On 2020-07-20 19:16, Christopher Faulet wrote:

Le 20/07/2020 à 17:22, Sander Klein a écrit :

On 2020-07-20 16:38, Christopher Faulet wrote:

Could you retry with the latest 2.2 snapshot
(http://www.haproxy.org/download/2.2/src/snapshot/haproxy-ss-LATEST.tar.gz)
?


Yes, I just did. Still a segfault. Just in case the new core is 
below.



Ok. Thanks to have tested. Could you share your configuration please ?
Don't forget to sanitize it if necessary.


I will send it off list.



FYI, I've just tested with HAProxy 2.2.1 and I cannot reproduce it 
anymore.


Greets,

Sander



Re: Haproxy 2.2.0 segfault

2020-07-20 Thread Sander Klein

On 2020-07-20 19:16, Christopher Faulet wrote:

Le 20/07/2020 à 17:22, Sander Klein a écrit :

On 2020-07-20 16:38, Christopher Faulet wrote:

Could you retry with the latest 2.2 snapshot
(http://www.haproxy.org/download/2.2/src/snapshot/haproxy-ss-LATEST.tar.gz)
?


Yes, I just did. Still a segfault. Just in case the new core is below.


Ok. Thanks to have tested. Could you share your configuration please ?
Don't forget to sanitize it if necessary.


I will send it off list.

Sander



Re: Haproxy 2.2.0 segfault

2020-07-20 Thread Sander Klein

On 2020-07-20 16:38, Christopher Faulet wrote:

Could you retry with the latest 2.2 snapshot
(http://www.haproxy.org/download/2.2/src/snapshot/haproxy-ss-LATEST.tar.gz)
?


Yes, I just did. Still a segfault. Just in case the new core is below.

Reading symbols from haproxy...Reading symbols from 
/usr/lib/debug/.build-id/3e/19e8d25a73e3ae6245be1b59986c1249b3792b.debug...done.

done.
[New LWP 12514]
[New LWP 12516]
[New LWP 12517]
[New LWP 12515]
[Thread debugging using libthread_db enabled]
Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/haproxy -sf 4951 -Ws -f 
/etc/haproxy/conf.d -p /run/haproxy.pid -S /r'.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x55830c624f97 in si_cs_send (cs=cs@entry=0x55830da71370) at 
include/haproxy/channel.h:128

128 include/haproxy/channel.h: No such file or directory.
[Current thread is 1 (Thread 0x7f127d86c280 (LWP 12514))]
(gdb) t a a bt full

Thread 4 (Thread 0x7f127cc3f700 (LWP 12515)):
#0  0x7f127dc77a97 in shutdown () at 
../sysdeps/unix/syscall-template.S:78

No locals.
#1  0x55830c55efcc in conn_sock_shutw (c=0x7f1270397240, 
c=0x7f1270397240, clean=1) at include/haproxy/connection.h:218

No locals.
#2  h1_shutw_conn (conn=0x7f1270397240, mode=mode@entry=CS_SHW_NORMAL) 
at src/mux_h1.c:2604

h1c = 0x7f12781f8370
__FUNCTION__ = "h1_shutw_conn"
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
#3  0x55830c55f129 in h1_shutw (cs=0x7f126826d670, 
mode=CS_SHW_NORMAL) at src/mux_h1.c:2593

h1s = 0x7f1278461510
h1c = 0x7f12781f8370
__FUNCTION__ = "h1_shutw"
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
#4  0x55830c6216d5 in cs_shutw (mode=CS_SHW_NORMAL, cs=out>) at include/haproxy/connection.h:258

No locals.
#5  stream_int_shutw_conn (si=0x7f126829ee28) at 
src/stream_interface.c:1052

cs = 
conn = 
ic = 0x7f126829eb90
oc = 
#6  0x55830c5c822b in si_shutw (si=0x7f126829ee28) at 
include/haproxy/stream_interface.h:428

--Type  for more, q to quit, c to continue without paging--c
No locals.
#7  process_stream (t=, context=0x7f126829eb80, 
state=) at src/stream.c:2264

srv = 
s = 0x7f126829eb80
sess = 
rqf_last = 
rpf_last = 
rq_prod_last = 
rq_cons_last = 
rp_cons_last = 
rp_prod_last = 
req_ana_back = 
req = 0x7f126829eb90
res = 0x7f126829ebf0
si_f = 0x7f126829ee28
si_b = 0x7f126829ee80
rate = 
#8  0x55830c687ec3 in run_tasks_from_lists 
(budgets=budgets@entry=0x7f127cc1c2dc) at src/task.c:476

process = 
tl_queues = 
t = 0x55830ebde640
budget_mask = 7 '\a'
done = 
queue = 
state = 
ctx = 
#9  0x55830c6888de in process_runnable_tasks () at src/task.c:672
tt = 0x55830c887600 
lrq = 
grq = 
t = 
max = {0, 196, 0}
max_total = 
tmp_list = 
queue = 3
max_processed = 
#10 0x55830c641d47 in run_poll_loop () at src/haproxy.c:2905
next = 
wake = 
#11 0x55830c6420e9 in run_thread_poll_loop (data=) at 
src/haproxy.c:3070

ptaf = 
ptif = 
ptdf = 
ptff = 
init_left = 0
init_mutex = {__data = {__lock = 0, __count = 0, __owner = 0, 
__nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 
0x0, __next = 0x0}}, __size = '\000' , __align = 0}
init_cond = {__data = {{__wseq = 9, __wseq32 = {__low = 9, 
__high = 0}}, {__g1_start = 7, __g1_start32 = {__low = 7, __high = 0}}, 
__g_refs = {0, 0}, __g_size = {0, 0}, __g1_orig_size = 4, __wrefs = 0, 
__g_signals = {0, 0}}, __size = "\t\000\000\000\000\000\000\000\a", 
'\000' , "\004", '\000' , __align = 
9}
#12 0x7f127e23cfa3 in start_thread (arg=) at 
pthread_create.c:486

ret = 
pd = 
now = 
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139717379356416, 
-8923786372068320456, 140722675008942, 140722675008943, 139717379356416, 
94021338073344, 8794595559024712504, 8794593102455593784}, 
mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 
0x0, cleanup = 0x0, canceltype = 0}}}

not_first_call = 
#13 0x7f127dc764cf in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

No locals.

Thread 3 (Thread 0x7f12777fe700 (LWP 12517)):
#0  0x7f127dc767ef in epoll_wait (epfd=106, events=0x7f1268015c10, 
maxevents=200, timeout=timeout@entry=4) at 
../sysdeps/unix/sysv/linux/epoll_wait.c:30

resultvar = 18446744073709551612
 

Re: Haproxy 2.2.0 segfault

2020-07-20 Thread Sander Klein

In the meantime I've captured a coredump. It gives the following output:

GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from haproxy...Reading symbols from 
/usr/lib/debug/.build-id/d7/b1ac9548a895bfb6276bb7491ae8d396835cf0.debug...done.

done.
[New LWP 1821]
[New LWP 1819]
[New LWP 1820]
[New LWP 1822]
[Thread debugging using libthread_db enabled]
Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/haproxy -sf 453 -Ws -f 
/etc/haproxy/conf.d -p /run/haproxy.pid -S /ru'.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x5637136d3e27 in si_cs_send (cs=cs@entry=0x7fee40d97520) at 
include/haproxy/channel.h:128

128 include/haproxy/channel.h: No such file or directory.
[Current thread is 1 (Thread 0x7fee4f5d6700 (LWP 1821))]
(gdb) t a a bt full

Thread 4 (Thread 0x7fee4edd5700 (LWP 1822)):
#0  0x7fee50df5917 in sched_yield () at 
../sysdeps/unix/syscall-template.S:78

No locals.
#1  0x563713748ee5 in ha_thread_relax () at 
include/haproxy/thread.h:233

No locals.
#2  thread_harmless_till_end () at src/thread.c:58
No locals.
#3  0x5637135c4457 in thread_harmless_end () at 
include/haproxy/thread.h:261

No locals.
#4  _do_poll (p=, exp=0, wake=1) at src/ev_epoll.c:212
status = 
fd = 
count = 
updt_idx = 
wait_time = 
old_fd = 
#5  0x5637136f0b92 in run_poll_loop () at src/haproxy.c:2952
next = 
wake = 
#6  0x5637136f0f79 in run_thread_poll_loop (data=) at 
src/haproxy.c:3070

ptaf = 
ptif = 
ptdf = 
ptff = 
init_left = 0
init_mutex = {__data = {__lock = 0, __count = 0, __owner = 0, 
__nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 
0x0, __next = 0x0}},

  __size = '\000' , __align = 0}
init_cond = {__data = {{__wseq = 9, __wseq32 = {__low = 9, 
__high = 0}}, {__g1_start = 7, __g1_start32 = {__low = 7, __high = 0}}, 
__g_refs = {0, 0}, __g_size = {0, 0},
__g1_orig_size = 4, __wrefs = 0, __g_signals = {0, 0}}, 
__size = "\t\000\000\000\000\000\000\000\a", '\000' , 
"\004", '\000' ,

  __align = 9}
#7  0x7fee513d4fa3 in start_thread (arg=) at 
pthread_create.c:486

ret = 
pd = 
now = 
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140661502072576, 
-6298750294423139148, 140728480730014, 140728480730015, 140661502072576, 
94794550343936,
6290062261562477748, 6290109760261295284}, 
mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 
0x0, cleanup = 0x0, canceltype = 0}}}

not_first_call = 
#8  0x7fee50e0e4cf in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

No locals.

Thread 3 (Thread 0x7fee4fdd7700 (LWP 1820)):
#0  0x7ffde71c8a49 in clock_gettime ()
No symbol table info available.
#1  0x7fee50e1bff6 in __GI___clock_gettime (clock_id=1, 
tp=0x7fee4fdb40e0) at ../sysdeps/unix/clock_gettime.c:115

retval = -1
sc_ret = 
vdsop = 
resultvar = 
__arg2 = 
__arg1 = 
_a2 = 
_a1 = 
sc_ret = 
vdsop = 
resultvar = 
__arg2 = 
__arg1 = 
_a2 = 
_a1 = 
#2  0x5637137361e5 in now_mono_time () at include/haproxy/time.h:523
ts = {tv_sec = 140661388919240, tv_nsec = 0}
ts = 
#3  __task_wakeup (t=0x56371727c270, root=0x563713936610 
) at src/task.c:149

No locals.
#4  0x563713673f9d in task_wakeup (f=256, t=) at 
include/haproxy/task.h:196

state = 
root = 
state = 
root = 
#5  stream_create_from_cs (cs=cs@entry=0x5637164df550) at 
src/stream.c:280

strm = 
#6  0x56371361088e in h1s_new_cs (h1s=0x563716d9c420) at 
src/mux_h1.c:492

cs = 0x5637164df550
cs = 
__FUNCTION__ = "h1s_new_cs"
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
__l = 
__x = 
--Type  for more, q to quit, c to continue without paging--c
__l = 
__x = 
__l = 
__x = 
__l = 
#7  h1s_create 

Haproxy 2.2.0 segfault

2020-07-20 Thread Sander Klein

Hi,

Last Thursday I upgraded to HAProxy 2.2.0 from Vincent Bernat's marvelous repository, but now I'm experiencing segfaults. I haven't investigated further since I only just discovered it, but it seems related to reloading HAProxy with config changes.


The logs show:
Jul 20 09:51:05 lb01-a kernel: [5415518.837709] haproxy[7856]: segfault 
at 0 ip 556323bf6d1b sp 7fe5bb76b1c8 error 6 in 
haproxy[556323a57000+1a6000]
Jul 20 09:51:05 lb01-a kernel: [5415518.837721] Code: 83 e0 fe 48 f7 40 08 fe ff ff ff 0f 84 a6 00 00 00 48 8b 48 10 f7 d2 83 e2 01 48 8b 14 d0 89 ce 49 89 c8 83 e6 01 49 83 e0 fe <49> 89 14 f0 f6 c2 01 74 5c 48 89 4a 0f 48 c7 40 10 00 00 00 00 48
Jul 20 09:51:05 lb01-a systemd[1]: haproxy.service: Main process exited, 
code=exited, status=139/n/a
Jul 20 09:51:05 lb01-a systemd[1]: haproxy.service: Failed with result 
'exit-code'.
Jul 20 09:51:06 lb01-a systemd[1]: haproxy.service: Service 
RestartSec=100ms expired, scheduling restart.
Jul 20 09:51:06 lb01-a systemd[1]: haproxy.service: Scheduled restart 
job, restart counter is at 11.

Jul 20 09:51:06 lb01-a systemd[1]: Stopped HAProxy Load Balancer.
Jul 20 09:51:06 lb01-a systemd[1]: Starting HAProxy Load Balancer...
Jul 20 09:51:07 lb01-a systemd[1]: Started HAProxy Load Balancer.

I'm not sure where to start with debugging. So, suggestions are welcome.

Regards,

Sander



Sudden queueing to backends

2020-03-10 Thread Sander Klein
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }

http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Ssl off if !{ ssl_fc }

	http-response add-header Strict-Transport-Security "max-age=31536000;" 
if { ssl_fc }


use_backend 
%[req.hdr(host),lower,regsub(^www\.,,i),map(/path/to/map/filename.map,default-cluster)]
default_backend default-cluster

backend some-backend
fullconn4096
modehttp

balance roundrobin

option  abortonclose
option  prefer-last-server
option  redispatch
option  httpchk GET /php-fpm-ping HTTP/1.0
http-check expect status 200

	default-server weight 100 agent-check agent-port 8081 agent-inter 20s 
check inter 2s rise 3 fall 3 slowstart 5m maxconn 50

server name1 abc:abc:abc::1:80 cookie name1
server name2 abc:abc:abc::2:80 cookie name2

# Sorry Server
server outage 127.0.0.1:80 backup

retries 1


Regards,

Sander Klein







Re: FW: HAProxy: Information request

2020-02-27 Thread Sander Klein

Hi,

please be aware you are posting to a public mailing list. You might want
to check where you sent your emails.

Regards,

Sander Klein



On 2020-02-27 22:14, EMEA Request wrote:

Hi Team,

Apologies for delayed response.

Can you please help with the details provided below and provide a
quote.

Thanks and Regards,

 [3]

 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

 anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
 Phone no : +91 8950320646

 Check out: Why SoftwareONE? [8] | PyraCloud [9] | Customer
Transformation [10]

From: Parsons, Branden 
Sent: Thursday, February 27, 2020 8:14 PM
To: Sharma, Anandita 
Subject: RE: HAProxy: Information request

Hi Anandita

Please see below

On AWS,  but not sure on the number of connections, can they get a
quote without knowing that? We will set up a call once we have an idea
of price?

With kind regards,

Branden Parsons

Internal Sales Executive

SoftwareONE UK Ltd

Direct. +44 203 3729 481

From: Sharma, Anandita 
Sent: 24 February 2020 14:16
To: Parsons, Branden 
Subject: FW: HAProxy: Information request

Hi Branden,

FYI

 [3]

 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

 anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
 Phone no : +91 8950320646

 Check out: Why SoftwareONE? [8] | PyraCloud [9] | Customer
Transformation [10]

From: Anamarija Murgic 
Sent: Friday, January 17, 2020 7:23 PM
To: EMEA Request 
Cc: Sean Meroth 
Subject: Re: HAProxy: Information request

Hi Anandita,

Thanks for letting me know.

Have a great weekend!

Best,
Anamarija

On 17/01/2020 1:34 PM, EMEA Request wrote:


Hi Anamarija ,

Apologies for delay in reply.

Our team is in contact with customer for some clarifications.

Will get back to you after clarifying.

Thanks and Regards,

[3]

Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
Phone no : +91 8950320646

Check out: Why SoftwareONE? [5] | PyraCloud [6] | Customer
Transformation [7]

From: Anamarija Murgic 
Sent: Tuesday, January 14, 2020 4:20 PM
To: Sharma, Anandita 
Cc: Sean Meroth 
Subject: Re: HAProxy: Information request

Hello Anandita,

I am following up on my previous email as I haven't heard back from
you. Please let me know when is a good time to talk?

Looking forward to hearing from you soon.

Thanks,
Anamarija

On 07/01/2020 6:08 PM, Anamarija Murgic wrote:


Hi Anandita,

My colleagues forwarded me your email request sent to our Open
source email asking for the product information.

We have both, ALOHA LB, virtual or hardware and we have our
software only HAProxy Enterprise Edition (HAPEE) that you would
install on your their own infrastructure.  HAProxy Enterprise
Edition (HAPEE) comes as an annual subscription per server while
ALOHA appliances prices are based on the application performance
you need to sustain.

It would be very helpful to know:

- Are they using current appliance on Azure or AWS
- The number of new connections (HTTP or HTTPS) per second
- The number of concurrent connections per second.

Also, if possible at all, if you can share with us their current
ADC configuration.

In general, we've found that it's best to get some more context in
a quick conference call that will help us understand the use case
of TheTrainline.com. Then we can make the best recommendation for
you and the project and go over pricing.

Please let me know your availability this week, tomorrow or Friday
afternoon?

Thanks,
Anamarija

--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [1] | Unsubscribe [2]


--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [1] | Unsubscribe [2]


--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [11] | Unsubscribe [12]

Links:
--
[1]
https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.com_d=DwMDaQc=-5LgSL_TkF3nGRQI95ci6eeFVMQ5VESHPf5koMIAxOAr=t_QP427c6yP1s5t47wSRYPnCW5oQW71pV6vHdqbRap8m=SdHBecwJYxDvk1OEHAJB19YxCUoN___V5z6l1bRc8Dws=VjsyrZ9hejKpS-zBGVukDcHhAXXYjJsF8nVP92Ocg6Ue>
 [2]
https://urldefense.proofpoint.com/v2/url?u=https-3A__www.haproxy.com_manage-2Demail-2Dpreferences_d=DwMDaQc=-5LgSL_TkF3nGRQI95ci6eeFVMQ5VESHPf5koMIAxOAr=t_QP427c6yP1s5t47wSRYPnCW5oQW71pV6vHdqbRap8m=SdHBecwJYxDvk1OEHAJB19YxCUoN___V5z6l1bRc8DwsgFR5QK4GXUhO2mbkb-MDVmXX-OZjVZlHwRZsF3UOBUe>
 [3] http://www.softwareone.com/
[4] http://@softwareone.com
[5]
https://urldefense.proofpoint.com/v2/url?u=https-3A__www.youtube.com_watch-3Fv-3DeGTUj4NtJP0d=DwMDaQc=-5LgSL_TkF3nGRQI95ci6eeFVMQ5VESHPf5koMIAxOAr=SfBJQfW0uf0NVY4ThIcrA41fo_36SpqIxi1clzEeEm4m=xWyRYnbDHfxkJ1P67Cs1weTnSNlfmzS78tzHZs

Re: Truncated response on 2.0.8

2019-10-28 Thread Sander Klein

On 2019-10-26 18:10, Ing. Andrea Vettori wrote:

Hello,
I'm using haproxy 2.0.8 and ssl termination with h2 and http1.1
protocols.
Until today we have always used http1.1 on the backends.

I’ve tried to use http2 on the development backend but I get
truncated response (not always but very often).

Trying to connect from the server running haproxy to the backend
server using curl with http2 I never get a truncated response.
Client-side I tried with two different browsers.

Any hint on what can cause this ?


Just to confirm. I think I have the same problem in the same setup. The 
first time I noticed this was on 2.0.7. Did not have time to debug it 
yet.


Regards,

Sander



rate limiting

2019-09-05 Thread Sander Klein

Hi,

I was looking at implementing rate limiting in our setup. But, since we 
are handling both IPv4 and IPv6 in the same frontends and backends, I 
was wondering how I could do that.


AFAIK a stick table is either IPv4 or IPv6 and you can only have one 
stick table per frontend or backend.


Is there a way to do this without splitting up the frontends and 
backends based on the IP version?


Regards,

Sander



Re: Random 502's and instant 504's after upgrading

2019-07-22 Thread Sander Klein

On 2019-07-22 13:05, Sander Klein wrote:

On 2019-07-22 10:59, Christopher Faulet wrote:

Le 20/07/2019 à 19:50, Sander Klein a écrit :

Sorry, I forgot to mention, I pushed another patch that may help you.
In HAProxy 2.0, it is the commit 0bf28f856 ("BUG/MINOR: mux-h1: Close
server connection if input data remains in h1_detach()").

I don't know if your HAProxy already includes it or not. If not,
please give it a try. If your tests were made with this last commit,
it means there is a bug somewhere else.




Just tested with haproxy-ss-20190720 and I do not see any strange 502's 
anymore. Thanks!


Greets,

Sander



Re: Random 502's and instant 504's after upgrading

2019-07-22 Thread Sander Klein

On 2019-07-22 10:59, Christopher Faulet wrote:

Le 20/07/2019 à 19:50, Sander Klein a écrit :

Sorry, I forgot to mention, I pushed another patch that may help you.
In HAProxy 2.0, it is the commit 0bf28f856 ("BUG/MINOR: mux-h1: Close
server connection if input data remains in h1_detach()").

I don't know if your HAProxy already includes it or not. If not,
please give it a try. If your tests were made with this last commit,
it means there is a bug somewhere else.


Ah, no, I used vanilla 2.0.2 with only your other patch applied. I'll see if I can test again.


Sander



Re: Random 502's and instant 504's after upgrading

2019-07-20 Thread Sander Klein

On 2019-07-19 14:05, Christopher Faulet wrote:

Le 19/07/2019 à 09:36, Sander Klein a écrit :



---
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 19 Jul 2019 07:32:03 GMT
Content-Type: application/json; charset=UTF-8
Transfer-Encoding: chunked
Vary: Accept-Encoding
Vary: Accept-Encoding
Cache-Control: private, must-revalidate
ETag: "178c3f242b0151fe57e02f6e8817ce3a"
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, PATCH, 
DELETE,

HEAD
Length: unspecified [application/json]
---

Maybe the 'Length: unspecified' has something to do with it.



No, this line is reported by wget because there is no "Content-Length" 
header.


Heh, doh, sorry about that :-)


So, as I said, I pushed a fix
(https://github.com/haproxy/haproxy/commit/03627245). It was
backported to 2.0. Could you check if it fixes your issue about 502
errors ?


I just patched up 2.0.2 and tested it. I still get 502's, but a lot fewer. I'm not sure if this is because I'm doing fewer requests/s or I'm hitting something else. The show errors output shows:


---
[20/Jul/2019:19:34:45.629] backend cluster1-xx (#11): invalid response
  frontend cluster1 (#3), server xxx (#1), event #0, src x.x.x.x:52007
  buffer starts at 0 (including 0 out), 10809 free,
  len 5575, wraps at 16336, error at position 0
  H1 connection flags 0x, H1 stream flags 0x4094
  H1 msg state MSG_RPVER(10), H1 msg flags 0x1404
  H1 chunk len 0 bytes, H1 body len 0 bytes :
---

---
[20/Jul/2019:19:40:32.643] backend cluster1-xx (#11): invalid response
  frontend webservices (#18), server xxx (#2), event #13, src 
x:x:x:x:x:x:x:x:59724

  buffer starts at 0 (including 0 out), 16377 free,
  len 7, wraps at 16384, error at position 0
  H1 connection flags 0x, H1 stream flags 0x4094
  H1 msg state MSG_RPBEFORE(8), H1 msg flags 0x1404
  H1 chunk len 0 bytes, H1 body len 0 bytes :

  0  :10}]}}
---

There is of course more with the first one, but I do not want to put that on the mailing list. It seems like a partial response body. I can send it to you privately if you want.



For 504 errors, I have no idea for now.


I'm not sure about these 504's either. I had a couple of reports about them, and one of our developers saw it once, but I haven't seen it myself or seen any proof of it. But like I said, the logs show nothing. I will keep an eye on this.


Sander









Re: Random 502's and instant 504's after upgrading

2019-07-19 Thread Sander Klein

Hi Lukas and Christopher,

I've combined the answers to your two mails.

On 2019-07-18 17:17, Lukas Tribus wrote:

Could be related to:
https://github.com/haproxy/haproxy/issues/176


Probably, but I'm not doing HTTP/1 and I have not found a request to 
reproduce it with. It happens at random.



Can you provide the "show errors" output from the admin cli for those
requests, and possible try one of the mentioned workarounds
(http-reuse never or http-server-close)?


The show errors:

---
Total events captured on [19/Jul/2019:08:34:25.093] : 31

[19/Jul/2019:08:34:23.405] backend cluster1-xx (#11): invalid response
  frontend webservices (#18), server xxx (#2), event #30, src 
x.x.x.x:63290

  buffer starts at 0 (including 0 out), 16268 free,
  len 116, wraps at 16384, error at position 0
  H1 connection flags 0x, H1 stream flags 0x4094
  H1 msg state MSG_RPBEFORE(8), H1 msg flags 0x1404
  H1 chunk len 0 bytes, H1 body len 0 bytes :

  0  
{"metadata":{"pagination":{"total":0,"rows":25,"currentPage":1,"pages"

  00070+ :0},"facets":[],"activeFacets":[]},"media":[]}
---

I also did this request with wget to see what the response should be, 
and it seems that this is the first part of the 297229 bytes long body. 
The response headers are:


---
  HTTP/1.1 200 OK
  Server: nginx
  Date: Fri, 19 Jul 2019 07:32:03 GMT
  Content-Type: application/json; charset=UTF-8
  Transfer-Encoding: chunked
  Vary: Accept-Encoding
  Vary: Accept-Encoding
  Cache-Control: private, must-revalidate
  ETag: "178c3f242b0151fe57e02f6e8817ce3a"
  Access-Control-Allow-Origin: *
  Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, PATCH, DELETE, 
HEAD

Length: unspecified [application/json]
---

Maybe the 'Length: unspecified' has something to do with it.

If I enable http-reuse the problem is still there. Only no option 
http-use-htx 'fixes' it.


I've stripped my config to the parts that I think are related:

---
global
master-worker
log /dev/loglocal0
log /dev/loglocal1 notice

daemon
userhaproxy
group   haproxy
maxconn 32768
spread-checks   3
nbproc  1
nbthread4
stats socket/var/run/haproxy.stat mode 666 level admin

	ssl-default-bind-ciphers 
ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

ssl-default-bind-options no-sslv3 no-tls-tickets
	ssl-default-server-ciphers 
ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

ssl-default-server-options no-sslv3 no-tls-tickets

tune.ssl.default-dh-param 2048

###
# Defaults
###
defaults
log global
timeout check   2s
timeout client  60s
timeout connect 10s
timeout http-keep-alive 4s
timeout http-request15s
timeout queue   30s
timeout server  60s
timeout tarpit  120s

errorfile 400 /etc/haproxy/errors.loc/400.http
errorfile 403 /etc/haproxy/errors.loc/403.http
errorfile 500 /etc/haproxy/errors.loc/500.http
errorfile 502 /etc/haproxy/errors.loc/502.http
errorfile 503 /etc/haproxy/errors.loc/503.http
errorfile 504 /etc/haproxy/errors.loc/504.http

frontend webservices
bind x.x.x.x:80 transparent
	bind x.x.x.x:443 transparent ssl crt /etc/haproxy/ssl/somecert.pem alpn 
h2,http/1.1

bind 2001:xxx:xxx:x::xx:80 transparent
	bind 2001:xxx:xxx:x::xx:443 transparent ssl crt 
/etc/haproxy/ssl/somecert.pem alpn h2,http/1.1


modehttp
maxconn 4096

option  httplog
option  dontlog-normal
option http-ignore-probes

Re: Random 502's and instant 504's after upgrading

2019-07-18 Thread Sander Klein

On 2019-07-18 09:15, Sander Klein wrote:

Hi,

Last night I tried upgrading from haproxy 1.9.8 to 2.0.2. After
upgrading I get random 502's and random instant 504's when visiting
pages.



Just tested with 'no option http-use-htx' in the defaults section and 
then my problems went away. Seems like a bug in HTX. Any info needed for 
this one?


Sander





Random 502's and instant 504's after upgrading

2019-07-18 Thread Sander Klein

Hi,

Last night I tried upgrading from haproxy 1.9.8 to 2.0.2. After 
upgrading I get random 502's and random instant 504's when visiting 
pages.


For the 502's I see the following in the log:

Jul 18 08:14:09 HOST haproxy[2003]: xxx:xxx:xxx:xxx:xxx::xxx 
[18/Jul/2019:08:14:09.133] cluster1-in~ cluster1/BACK1 0/0/0/-1/0 502 
1976 - - PH-- 382/129/8/5/0 0/0 {somesite.nl|Mozilla/5.0 
(Win|354|https://somesite.nl/stuff/goes/here/xxx} {} "POST 
/stuff/goes/here/xxx HTTP/2.0"
Jul 18 08:15:08 HOST haproxy[2003]: x.x.x.x:50004 
[18/Jul/2019:08:15:08.712] cluster1-in~ cluster1/BACK2 0/0/0/-1/0 502 
1976 - - PH-- 365/150/5/2/0 0/0 {somesite.nl|Mozilla/5.0 
(Win||https://somesite.nl/other/stuf/here/please/xxx} {} "GET 
/img/uploads/path/somejpeg.jpg HTTP/2.0"


The 504's are another thing: I do not see them logged at all. The only thing I notice is that they are instant, so no timeout is reached.


Downgrading back to 1.9.8 fixes the problem again. I might try disabling 
htx later today to see what happens.


The backends are NGINX servers which talk plain http/1.1.

Sander



Re: CPU Spikes

2019-07-15 Thread Sander Klein

On 2019-07-09 08:53, Sander Klein wrote:


It could be useful to issue "show activity" twice 1 second apart when
this happens, and maybe even "show fd" and "show sess all" if you 
don't

have too many connections.


Right, I will do the above steps. But, since this only happens on
Mondays we have to wait a bit ;-)


Drat, the harvester was early this week. So, it was already done when I 
arrived at work. I hope I can catch it next week.


Sander



Re: Runaway process

2019-07-12 Thread Sander Klein

On 2019-07-12 04:27, Willy Tarreau wrote:


If you can at least show the backtrace, this could be useful and we
can see if the core would be needed or not. Maybe this will match
another known bug.


This is the BT of yesterday:

---
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 27066
[New LWP 27067]
[New LWP 27068]
[New LWP 27069]
[Thread debugging using libthread_db enabled]
Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1".
0x7f08655ef303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

84  ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) thread apply all bt

Thread 4 (Thread 0x7f084d058700 (LWP 27069)):
#0  0x7f08655ef303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

#1  0x562cea640f95 in ?? ()
#2  0x562cea6e6792 in ?? ()
#3  0x7f08c4a4 in start_thread (arg=0x7f084d058700) at 
pthread_create.c:456
#4  0x7f08655eed0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


Thread 3 (Thread 0x7f084d859700 (LWP 27068)):
#0  0x562cea6af336 in ?? ()
#1  0x562cea73cd1d in si_cs_send ()
#2  0x562cea73d90a in si_update_both ()
#3  0x562cea6a1976 in process_stream ()
#4  0x562cea770728 in process_runnable_tasks ()
#5  0x562cea6e67c1 in ?? ()
#6  0x7f08c4a4 in start_thread (arg=0x7f084d859700) at 
pthread_create.c:456
#7  0x7f08655eed0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


Thread 2 (Thread 0x7f084e05a700 (LWP 27067)):
#0  0x7f08655ef303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

#1  0x562cea640f95 in ?? ()
#2  0x562cea6e6792 in ?? ()
#3  0x7f08c4a4 in start_thread (arg=0x7f084e05a700) at 
pthread_create.c:456
#4  0x7f08655eed0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


Thread 1 (Thread 0x7f0866e5c180 (LWP 27066)):
#0  0x7f08655ef303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

#1  0x562cea640f95 in ?? ()
#2  0x562cea6e6792 in ?? ()
#3  0x562cea63e96c in main ()
---

And today I had another one:

---
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 6982
[New LWP 6983]
[New LWP 6984]
[New LWP 6985]
[Thread debugging using libthread_db enabled]
Using host libthread_db library 
"/lib/x86_64-linux-gnu/libthread_db.so.1".
0x7fbdf0713303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

84  ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) thread apply all bt

Thread 4 (Thread 0x7fbdd817c700 (LWP 6985)):
#0  0x5606dd570457 in ?? ()
#1  0x5606dd5fdd1d in si_cs_send ()
#2  0x5606dd5ff45d in si_cs_io_cb ()
#3  0x5606dd6319a6 in process_runnable_tasks ()
#4  0x5606dd5a77c1 in ?? ()
#5  0x7fbdf17904a4 in start_thread (arg=0x7fbdd817c700) at 
pthread_create.c:456
#6  0x7fbdf0712d0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


Thread 3 (Thread 0x7fbdd897d700 (LWP 6984)):
#0  0x7fbdf0713303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

#1  0x5606dd501f95 in ?? ()
#2  0x5606dd5a7792 in ?? ()
#3  0x7fbdf17904a4 in start_thread (arg=0x7fbdd897d700) at 
pthread_create.c:456
#4  0x7fbdf0712d0f in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:97


Thread 2 (Thread 0x7fbdd917e700 (LWP 6983)):
#0  0x7fbdf0713303 in epoll_wait () at 
../sysdeps/unix/syscall-template.S:84

#1  0x5606dd501f95 in ?? ()
#2  0x5606dd5a7792 in ?? ()
#3  0x7fbdf17904a4 in start_thread (arg=0x7fbdd917e700) at 
pthread_create.c:456
#4  0x7fbdf0712d0f in clone () at 

Re: Runaway process

2019-07-11 Thread Sander Klein

On 2019-07-11 12:27, Tim Düsterhus wrote:

Try attaching to the process with `gdb -p 12345` with 12345 being the
process ID. Then:

1. Get a backtrace for all threads: thread apply all bt
2. Generate a core file: generate-core-file

If you are also able to connect to the stats socket of that process 
then

the following might be helpful as well:

1. show info
2. show fd
3. show activity
4. show sess all
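
(For example, something along these lines — the PID, binary and socket path will differ:)

```
gdb -p "$(pidof -s haproxy)" -batch \
    -ex 'thread apply all bt' \
    -ex 'generate-core-file'

echo "show info" | socat stdio /var/run/haproxy.stat
```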


I've created the backtrace and the core file. I couldn't connect to the 
stats socket anymore so no info on that.


If a dev is interested I can send it.

Regards,

Sander



Runaway process

2019-07-11 Thread Sander Klein

Hi,

I seem to have a runaway HAProxy process since yesterday evening around 20:50. This process has been eating 100% CPU continuously. (HAProxy 1.9.8)


Of course I can just kill it and go on with my life, but I was wondering 
if there was any interest to see if we can uncover a bug here. If so, 
please let me know what you need from me.


Regards,

Sander



Re: CPU Spikes

2019-07-09 Thread Sander Klein

Hey Willy,


On 2019-07-09 08:09, Willy Tarreau wrote:
What's you CPU like between the peaks ? 1%, 10%, 50% ? Just to get a 
rough

estimate of whether it's something reaching a critical point or if it's
something doing its mess alone in its corner.


In between the spikes it's about 7% System, 11% User, 6% Softirq, 76% 
Idle. Bandwidth is then about 500Mbit/s, mostly outbound.


What I didn't notice before, but just saw while staring at my graphs, is that I get more incoming traffic during the CPU spikes. So, I'm doing about 500Mbit/s, then the incoming traffic rises to about 100Mbit/s (probably an HTTP POST), the CPU spikes, total traffic drops to about 200Mbit/s, and everything starts getting slow.


I had HAProxy running on physical hardware with an E5-2407 and 1Gbit 
NIC. Now it is running as a VM on an E5-2650 with 10Gbit NIC. With the 
same issues.



Are you using threads ? I'm asking because I'm currently working on an
issue which I found could cause exactly this behaviour. I'm fairly 
certain

we've met it in the past without being able to attribute it to exactly
this.


Yes, I'm using threads.


If you're using threads, attaching gdb to the process and issuing "info
threads" will tell us where they are. If many of them are in
fd_update_events() or fd_may_recv(), you're likely on the one I've been
working on.

Other possibilities (due to the regularity of your observation) are :
  - timeouts (check in your conf if a 10s timeout appears somewhere,
maybe it triggers and is improperly caught)


I have the following timeouts in defaults:
timeout client  60s
timeout connect 10s
timeout http-keep-alive 4s
timeout http-request15s
timeout queue   30s
timeout server  60s
timeout tarpit  120s

Looking at the spikes again, it's more like 20 seconds up, 20 seconds down. But that probably has more to do with the POST taking that long.



  - health checks (maybe you have 10s checks, or 2s checks with 4
retries or I don't know what, which causes a special event to
occur after 10s)


Checks are every 2s with a rise of 3 and a fall of 3.


In any case you're clearly facing a bug, but it's always difficult to
tell.

It could be useful to issue "show activity" twice 1 second apart when
this happens, and maybe even "show fd" and "show sess all" if you don't
have too many connections.


Right, I will do the above steps. But, since this only happens on 
Mondays we have to wait a bit ;-)
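
(Concretely, when it happens I'll grab something like this, using the stats socket from my config:)

```
echo "show activity" | socat stdio /var/run/haproxy.stat
sleep 1
echo "show activity" | socat stdio /var/run/haproxy.stat
echo "show fd"       | socat stdio /var/run/haproxy.stat
echo "show sess all" | socat stdio /var/run/haproxy.stat
```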


Regards,

Sander




CPU Spikes

2019-07-08 Thread Sander Klein

Hi,

I'm having an issue with HAProxy causing CPU spikes with certain 
traffic.


We have a client who is downloading lots of URL's during the night. When 
the download starts there is not much other traffic going on and there 
doesn't seem to be any problem. But, when the morning comes, 'normal' 
traffic starts hitting HAProxy and every 10 seconds or so, HAProxy 
starts eating 100% of CPU while network traffic drops. When HAProxy 
stops eating CPU after 10 seconds, network traffic rises again. When the 
crawler is finished everything returns to normal. So it looks like some 
kind of mix of traffic which causes it.


I've tested it with HAProxy 1.8.20, 1.9.8 (which I am running by default) and 2.0.1. They all show the same behaviour. I also tried with 2 different kernels to see if anything happens there. With kernel 4.9, top shows HAProxy using 100% CPU, where 50% is user and 50% is system. With kernel 4.19 I see 100% CPU usage with 70% user and 50% system.


I also tried disabling H2, splicing, and some regexes I use. I even tried new hardware, and moved it to a VM just to see if I could find any difference, but nothing...


Does anyone have a good idea how to troubleshoot this any further?

Regards,

Sander



Re: Using haproxy together with NFS

2018-08-03 Thread Sander Klein
Hi,

You might want to have a look at IPVS for instance in combination with 
Keepalived. You can then even use udp mounts if you want. 
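
A rough sketch of what I mean on the Keepalived side (addresses are examples; the real servers would be your NFS servers, and a VRRP instance would normally carry the VIP as well):

```
virtual_server 192.0.2.100 2049 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.0.2.11 2049 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.0.2.12 2049 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```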

Just my 2 cents.

Regards,

Sander 


> On 2 Aug 2018, at 18:40, Lucas Rolff  wrote:
> 
> I indeed removed the send-proxy - then I had to put the IP of haproxy in the 
> NFS exports file instead to be able to mount the share (which makes sense 
> seen from a NFS perspective).
> 
> Making the NFS server support proxy protocol, isn't something I think will 
> happen - I rely on the upstream packages (CentOS 7 packages in this case).
> 
> And using transparency mode - I think relying on stuff going via haproxy for 
> routing won't be a possibility in this case - so I guess I have to drop my 
> wish about haproxy + NFS in this case, I'd like something that is fairly 
> standard without too much modifications on the current NFS infrastructure 
> (since it would introduce more complexity).
> 
> Thanks for your replies both of you!
> 
> Best Regards,
> 
> On 02/08/2018, 18.09, "Willy Tarreau"  wrote:
> 
>>On Thu, Aug 02, 2018 at 04:05:24AM +, Lucas Rolff wrote:
>> Hi michael,
>> 
>> Without the send-proxy, the client IP in the export would have to be the
>> haproxy server in that case right?
> 
>That's it. But Michael is absolutely right, your NFS server doesn't support
>the proxy protocol, and the lines it emits below indicate it :
> 
>  Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: 
> RPC: fragment too large: 1347571544
>  Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: 
> RPC: fragment too large: 1347571544  
>  Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: 
> RPC: fragment too large: 1347571544
>  Aug 01 21:44:45 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: 
> RPC: fragment too large: 1347571544
> 
>This fragment size (1347571544) is "PROX" encoded in big endian, which are
>the first 4 chars of the proxy protocol header :-)
> 
>> The issue there is then, that I end up with all clients having access to
>> haproxy can suddenly mount all shares in nfs, which I would like to prevent
> 
>Maybe you can modify your NFS server to support the proxy protocol, that
>could possibly make sense for your use case ? Otherwise on Linux you may
>be able to configure haproxy to work in transparent mode using "source
>0.0.0.0 usesrc clientip" but beware that it requires some specific iptables
>rules to divert the traffic and send it back to haproxy. It will also 
> require
>that all your NFS servers route the clients via haproxy for the response
>traffic. This is not always very convenient.
> 
>Regards,
>Willy
> 
> 
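
A quick way to double-check that mapping between the reported fragment size and the proxy protocol signature, using nothing more than standard shell tools (a small sketch):

    printf '%08x\n' 1347571544    # prints 50524f58
    printf '\x50\x52\x4f\x58\n'   # prints PROX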




Re: SNI matching issue when hostname ends with trailing dot

2018-07-27 Thread Sander Klein
Hi Warren,

As far as I know this is by design. If you do not want this behavior you need 
to use strict-sni in your bind statement. 
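
For example (a sketch, the certificate directory is a placeholder):

    bind :443 ssl crt /etc/haproxy/certs/ strict-sni

With strict-sni, a handshake whose SNI does not match a loaded certificate is rejected instead of falling back to the default certificate.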

Regards

Sander


> On 27 Jul 2018, at 12:47, Warren Rohner  wrote:
> 
> Hi HAProxy list
> 
> Just thought I'd resend this report from May in case it was missed. If it's a 
> non-issue, I apologise.
> 
> Regards
> Warren
> 
> At 15:47 2018/05/22, Warren Rohner wrote:
>> Hi HAProxy list
>> 
>> We use an HAProxy 1.7.11 instance to terminate SSL and load balance 100+ 
>> websites.
>> 
>> The simplified bind line below specifies a default cert (i.e. 
>> secure.example.com.pem) as required in this HAProxy version, and a directory 
>> path to all other certs (i.e. ./):
>> 
>> bind 127.0.0.1:443 ssl crt secure.example.com.pem crt ./
>> 
>> This configuration works as expected. HAProxy finds all certs and the 
>> correct one is used when TLS SNI extension is provided. For example, 
>> visiting https://secure.example.com/ and https://www.example.com/ (with SNI 
>> capable web browser) both work perfectly.
>> 
>> The other day I inadvertently appended a trailing dot to the hostname for 
>> one of our sites (e.g. https://www.example.com.), and when I did this 
>> HAProxy returned the default cert to the browser rather than the expected 
>> cert for that particular site. I'm not certain, but could this be a possible 
>> bug in the HAProxy code that matches servername provided by browser's TLS 
>> SNI extension against all loaded certificates?
>> 
>> As a further example of problem, I note that the issue can be reproduced on 
>> the haproxy.org website as follows using OpenSSL client:
>> 
>> Works as expected, HAProxy returns correct cert for haproxy.org:
>> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org
>> 
>> With trailing dot on servername, HAProxy returns what I think is the default 
>> cert (an invalid StartCom-issued cert for formilux.org):
>> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org.
>> 
>> Please let me know if I should provide any further information.
>> 
>> Regards
>> Warren


Re: Haproxy 1.8.4 400's with http/2

2018-02-22 Thread Sander Klein

Thanks Lukas,

It was indeed the option httpclose enabled only on that backend.
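
For reference, the change was basically this in the affected backend (a simplified sketch, the backend name is a placeholder):

    backend backend-name
        # removed: option httpclose   (this is what broke the H2 requests)
        option http-keep-alive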

Greets,

Sander

On 2018-02-21 16:49, Lukas Tribus wrote:

Hello Sander,

make sure you use "option http-keep-alive" as the HTTP mode; specifically,
"option httpclose" will cause issues with H2.

If that's not it, please share the configuration; also you may want to
try enabling proxy_ignore_client_abort in the nginx backend [1].



cheers,
lukas


[1]
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort

On 21 February 2018 at 15:29, Sander Klein <roe...@roedie.nl> wrote:

Hi All,

Today I tried enabling http/2 on haproxy 1.8.4. After enabling it, all 
requests to a certain backend started to give 400's, while requests to 
other backends worked as expected. I get the following in haproxy.log:

Feb 21 14:31:35 localhost haproxy[22867]:
2001:bad:coff:ee:cd97:5710:4515:7c73:52553 [21/Feb/2018:14:31:30.690]
backend-name/backend-04 1/0/1/-1/4758 400 1932 - - CH-- 518/215/0/0/0 
0/0

{host.name.tld|Mozilla/5.0
(Mac||https://referred.name.tld/some/string?f=%7B%22la_la_la%22:%7B%22v%22:%22thingy%22%7D%7D}
{} "GET /some/path/here/filename.jpg HTTP/1.1"

The backend server is nginx which proxies to a nodejs application. When
looking at the request on nginx it gives an HTTP 499 error.

Is this a known issue? Or, is this a new H2 related issue?

Anyway I can do some more troubleshooting?

Greets,

Sander




Haproxy 1.8.4 400's with http/2

2018-02-21 Thread Sander Klein

Hi All,

Today I tried enabling http/2 on haproxy 1.8.4. After enabling it, all 
requests to a certain backend started to give 400's, while requests to 
other backends worked as expected. I get the following in haproxy.log:


Feb 21 14:31:35 localhost haproxy[22867]: 
2001:bad:coff:ee:cd97:5710:4515:7c73:52553 [21/Feb/2018:14:31:30.690] 
backend-name/backend-04 1/0/1/-1/4758 400 1932 - - CH-- 518/215/0/0/0 
0/0 {host.name.tld|Mozilla/5.0 
(Mac||https://referred.name.tld/some/string?f=%7B%22la_la_la%22:%7B%22v%22:%22thingy%22%7D%7D} 
{} "GET /some/path/here/filename.jpg HTTP/1.1"


The backend server is nginx which proxies to a nodejs application. When 
looking at the request on nginx it gives an HTTP 499 error.


Is this a known issue? Or, is this a new H2 related issue?

Anyway I can do some more troubleshooting?

Greets,

Sander



Re: h2 bad requests

2017-12-28 Thread Sander Klein

Hi Lucas,

On 2017-12-28 22:38, Lucas Rolff wrote:

Hi Sander,

Which exact browser version do you use?

There’s an ongoing thread already
(https://www.mail-archive.com/haproxy@formilux.org/msg28333.html )
regarding the same issue.


I just noticed and was reading up.

I can reproduce this problem on Firefox Quantum 57.0.3, Chrome 
63.0.3239.84, Safari 11.0.2. All on OSX 10.12.6.


It only happens when I post something, but not every time, which makes 
it a bit fishy.


Greets,

Sander



h2 bad requests

2017-12-28 Thread Sander Klein

Hi,

I'm playing around with http2 on haproxy 1.8.2 but when I enable it I 
get HTTP 400's on some requests. When sending a "show errors" to the admin 
socket I get no errors at all. Disabling http2 makes the error go away.
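
(I'm querying the admin socket like this, using the stats socket path from the config below:

    echo "show errors" | socat stdio /run/haproxy/admin.sock
)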


The logfile shows:

Dec 28 22:09:02 hostname haproxy[23043]: x.x.x.x:58219 
[28/Dec/2017:22:09:02.066] web~ nginx/nginx 0/0/2/-1/10 400 188 - - CH-- 
4/2/0/0/0 0/0 {something.nl|Mo
zilla/5.0 
(Mac|1695|https://something.nl/some/path/?_lala=option&_another=option} 
{} "POST /some/path/?_task=doit&_action=dothisaction HTTP/1.1"


I'm looking for a way to troubleshoot this. My config looks like:

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon

ssl-default-bind-options no-sslv3 no-tls-tickets
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

  ssl-default-server-options no-sslv3 no-tls-tickets
ssl-default-server-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

ssl-server-verify none
tune.ssl.default-dh-param 4096


defaults
log global
mode http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http


frontend web
bind x.x.x.x:80
bind x.x.x.x:443 ssl crt /etc/haproxy/SSL/ strict-sni alpn 
h2,http/1.1

bind :xxx::xxx::1:80
bind :xxx::xxx::1:443 ssl crt /etc/haproxy/SSL/ 
strict-sni alpn h2,http/1.1


mode http
maxconn 4096

option httplog
option splice-auto

capture request header Host len 64
capture request header User-Agent   len 16
capture request header Content-Length   len 10
capture request header Referer  len 256
capture response header Content-Length  len 10

acl in_badstuff url_reg -i -f /etc/haproxy/filters/badstuff.reg
acl in_badstuff url_sub -i -f 
/etc/haproxy/filters/phpmyadmin.txt
acl in_badstuff hdr_sub(referer) -i -f 
/etc/haproxy/filters/referrer.txt

acl is_host_falco hdr_sub(Host) -i somehost.nl


use_backend badstuff if in_badstuff
use_backend nginx-plain if !{ ssl_fc }
use_backend nginx

backend nginx
fullconn 128
mode http

option abortonclose
option http-keep-alive

server nginx 127.0.0.1:443 ssl cookie nginx send-proxy

backend nginx-plain
fullconn 128
mode http

option abortonclose
option http-keep-alive

server nginxplain 127.0.0.1:80 cookie nginx-plain send-proxy

backend badstuff
  mode http
  errorfile 503 /etc/haproxy/errors/503.http

Greets,

Sander




Re: [ANNOUNCE] haproxy-1.8.0

2017-11-26 Thread Sander Klein

On 2017-11-26 19:57, Willy Tarreau wrote:

Hi all,

After one year of intense development and almost one month of debugging,
polishing, and cross-review work trying to prevent our respective coworkers
from winning the first bug award, I'm pleased to announce that haproxy 1.8.0
is now officially released!


Woohoo! Thanks for the work.

Greets,

Sander Klein



Re: Experimental / broken HTTP/2 support

2017-10-16 Thread Sander Klein

On 2017-10-16 14:19, Willy Tarreau wrote:

On Mon, Oct 16, 2017 at 01:28:12PM +0200, Pavlos Parissis wrote:
I guess following the step-by-step approach, 1st client side, makes sense
as it reduces the size of breakage :-)


Yes but not only this. It's also the fact that the main benefits of H2 are
on the client side, where the latency is the highest.


Which is funny, because I wanted to use H2 to connect to the server side 
since I have a situation (please, don't ask) where latency between 
HAProxy and the server is quite high. Looking at this I thought using H2 
to connect to the server might be a dirty/easy 'fix' to get this more 
acceptable.


Since latency on the client side is also high-ish in this setup, using 
H2 on the client side might help as well.


Regards,

Sander



Re: Experimental / broken HTTP/2 support

2017-10-16 Thread Sander Klein

Hi Willy,

On 2017-10-15 19:02, Willy Tarreau wrote:

If everything goes well, the final rebased and cleaned up code should
be available for a release candidate by the end of the month.


Great, I will wait and see what you have available at the end of the 
month. I'm in no hurry, I just wanted to fiddle around.



Stay tuned!


I will!

Greets,

Sander



Re: Experimental / broken HTTP/2 support

2017-10-15 Thread Sander Klein

Hi,

I haven't been paying much attention to the list lately, but I am 
wondering what the current status of http/2 support is in 
1.8-(dev|snapshot).


Is it in a usable-but-needs testing state? Or more like 
stay-away-because-it-kills-kittens state?


Greets,

Sander

On 2017-08-18 16:49, Willy Tarreau wrote:

...well, I think everything is in the subject :-)

Hi, by the way!

I'm able to gateway http/2 traffic to www.haproxy.org and am getting 
logs

to prove it :

   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43740
[18/Aug/2017:15:56:51.282] www~ www/ -1/13/0/-1/18 0 15 - -
 1/1/0/0/0 0/0 http=1 ""
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.302] www~ www/www 0/0/58/18/104 200 36300 - -
CD-- 1/1/0/0/0 0/0 http=2 "GET / HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.415] www~ www/www 0/0/30/16/46 200 504 - - CD--
1/1/0/0/0 0/0 http=2 "GET /size.js HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.466] www~ www/www 0/0/30/16/46 200 215 - - CD--
12/12/11/11/0 0/0 http=2 "GET /size.css?1509x761 HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.491] www~ www/www 0/0/25/19/44 200 11198 - -
CD-- 13/13/12/12/0 0/0 http=2 "GET /img/HAProxy_mini_pub.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.492] www~ www/www 0/0/26/19/45 200 10443 - -
CD-- 12/12/11/11/0 0/0 http=2 "GET /img/POM_mini_pub.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.491] www~ www/www 0/0/28/19/47 200 7772 - - CD--
11/11/10/10/0 0/0 http=2 "GET /img/ALOHA_mini_pub.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.491] www~ www/www 0/0/29/22/51 200 1731 - - CD--
10/10/9/9/0 0/0 http=2 "GET /img/btn_donate_SM_eur.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.489] www~ www/www 0/0/29/24/53 200 3743 - - CD--
9/9/8/8/0 0/0 http=2 "GET /img/logo-med.png HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.490] www~ www/www 0/0/29/23/52 200 1729 - - CD--
8/8/7/7/0 0/0 http=2 "GET /img/btn_donate_SM_usd.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.500] www~ www/www 0/0/26/18/44 200 3220 - - CD--
7/7/6/6/0 0/0 http=2 "GET /img/haproxy-pmode.png HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.501] www~ www/www 0/0/26/18/44 200 2261 - - CD--
6/6/5/5/0 0/0 http=2 "GET /img/pwby.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.492] www~ www/www 0/0/26/31/58 200 19247 - -
CD-- 5/5/4/4/0 0/0 http=2 "GET /img/World_IPv6_launch_banner_256.png
HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.501] www~ www/www 0/0/25/24/49 200 396 - - CD--
4/4/3/3/0 0/0 http=2 "GET /img/fr-off.png HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.501] www~ www/www 0/0/25/25/50 200 441 - - CD--
3/3/2/2/0 0/0 http=2 "GET /img/en-off.png HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.514] www~ www/www 0/0/28/15/43 200 850 - - CD--
2/2/1/1/0 0/0 http=2 "GET /img/ipv6nok.gif HTTP/1.1"
   <134>Aug 18 15:56:51 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.525] www~ www/www 0/0/30/232/262 200 376 - -
CD-- 1/1/0/0/0 0/0 http=2 "GET /img/ipv6back.png HTTP/1.1"
   <134>Aug 18 15:57:11 haproxy[6566]: 127.0.0.1:43746
[18/Aug/2017:15:56:51.300] www~ www/ -1/2/0/-1/20489 0 99131 -
- cD-- 0/0/0/0/0 0/0 http=1 ""

Look at the accept dates for the request, many of them are grouped, and
there's this "http=2" field in the log indicating the on-wire format.

But you'll also note all the "CD--" flags, the "" etc...

The code more or less works. There are still some race conditions that
will occasionally cause some requests to time out, especially if you
build with "-DDEBUG_H2" which will emit a lot of printf.

At least now with this code in place I could understand what is wrong
and how it should be re-architected. There's still a lot of work to do
in this area (there are some design notes and contradictory thoughts
in doc/internals/h2.txt) but I thought that now that it's more or less
working and that I'm going to break it and restart it from scratch
differently, it could be nice that I share it for those curious who
want to play with it a bit.

DON'T PUT THIS IN PRODUCTION!!! There are a lot of unhandled errors,
there are occasional leaks due to certain races not being caught etc.
I'm not even going to put it myself in front of haproxy.org nor at
home. It may start a fire in your house, attract UFOs full of 
man-eating
aliens, or even make me temporarily smart, nothing you want to 
experience!


The design for now consists in demultiplexing the H2 streams from
the incoming connection, 

Re: ASML SW quote request for resale

2017-08-04 Thread Sander Klein
Hi Brigitta,

You are contacting the haproxy mailing list which is used for support. 

The haproxy gpl edition is free for use by anyone. But if you want commercial 
support you probably want to contact cont...@haproxy.com

Regards,

Sander

> On 4 Aug 2017, at 12:55, Brigitta Csaszar  wrote:
> 
> Dear Sir/ Madame
> 
> I'm Brigitta Csaszar from ASML Procurement and contacting you in case of some 
> SW prices for resale.
> ASML is one of the world’s leading manufacturers of chip-making machines with 
> 16.500 employee. Founded in the Netherlands in 1984, the company is publicly 
> traded on Euronext Amsterdam and NASDAQ under the symbol ASML. For further 
> information,  please visit https://www.asml.com/  .
> 
> Our engineers are developing a new virtual computing platform (VCP), that is 
> going to be sold to our end customers. Your below listed SW would be built 
> into our VCP as part of our solution. So due to that fact, we are interested 
> in your resale prices and conditions:
> 
> Role: Loadbalancing
> Name: Haproxy
> Version: 1.7.5 or later
> License: LGPL/GPL
> Multiplicity: 1 instance per VCP installation
> Desired metric: 1 license per VCP (virtual computing platform)
>  
> Would you please send me an offer on this SW license, in which you inform me, 
> about: 
>- your SW sales conditions, if your customer would like to resell your SW 
> to its endcustomers
>   - the list of your distributors for EMEA, who can sell your SW for resale
>   - your prices and confirmation that we can use these SW for resale
>- your maintenance option and prices ( I would be interested in the 
> annual, 3-year and 5-year maintenance prices and conditions)
>- your SLAs; Modification Request Process and Problem Report Help Desk.
>   - validity of your prices
> 
> We are at the information gathering and planning phase, so at this moment I 
> would like to have prices on these basic numbers.
> 
> In  case of any questions regarding our required SW, please contact me.
> I would highly appreciate if you could send me your offer till Tuesday (7th 
> August) 17:00 PM
> 
> If you have any questions, do not hesitate to contact me.
> Thank you in advance.
> 
> Kind regards,
> Brigitta Csaszar
> ASML Senior Tactical Buyer - IT, Professional Services 
>  
> E-mail: brigitta.csas...@asml.com Phone: +36-1-778-7292
> 
> -- The information contained in this communication and any attachments is 
> confidential and may be privileged, and is for the sole use of the intended 
> recipient(s). Any unauthorized review, use, disclosure or distribution is 
> prohibited. Unless explicitly stated otherwise in the body of this 
> communication or the attachment thereto (if any), the information is provided 
> on an AS-IS basis without any express or implied warranties or liabilities. 
> To the extent you are relying on this information, you are doing so at your 
> own risk. If you are not the intended recipient, please notify the sender 
> immediately by replying to this message and destroy all copies of this 
> message and any attachments. Neither the sender nor the company/group of 
> companies he or she represents shall be liable for the proper and complete 
> transmission of the information contained in this communication, or for any 
> delay in its receipt.


Re: haproxy fails to properly direct connection to correct back end.

2017-07-30 Thread Sander Klein

Hi P S,

I have to say, the way you type your emails makes one really want to 
help you. You seem to be positive, constructive and I don't see any 
whining. And yes, I'm a sarcastic person.


So, for your first problem: I don't know what goes wrong, but for me, if 
haproxy fails to start it actually does give back the reason why it 
doesn't start. The complete error is 'See 
"systemctl status haproxy.service" and "journalctl -xe" for details.' 
and 'journalctl -xe' gives it back nicely. Did you read the output?


Now, back to your original issue.

I've build a simple setup, added your config, tried to test it and:

root@wonko-the-sane:/etc/haproxy# haproxy -c -f /etc/haproxy/haproxy.cfg
[WARNING] 210/135916 (26036) : config : 'option forwardfor' ignored for 
backend 'nodejs' as it requires HTTP mode.
[WARNING] 210/135916 (26036) : config : 'option forwardfor' ignored for 
backend 'nodejs_test' as it requires HTTP mode.
[WARNING] 210/135916 (26036) : config : 'option http-no-delay' ignored 
for backend 'nodejs_test' as it requires HTTP mode.
[ALERT] 210/135916 (26036) : http frontend 'all' 
(/etc/haproxy/haproxy.cfg:16) tries to use incompatible tcp backend 
'nodejs' (/etc/haproxy/haproxy.cfg:1) as its default backend (see 
'mode').
[ALERT] 210/135916 (26036) : http frontend 'all' 
(/etc/haproxy/haproxy.cfg:16) tries to use incompatible tcp backend 
'nodejs_test' (/etc/haproxy/haproxy.cfg:8) in a 'use_backend' rule (see 
'mode').

[ALERT] 210/135916 (26036) : Fatal errors found in configuration.

I do not run anything on port 80, nor on 8090 since I do not have to, 
because *your config is simply broken*. May I suggest you Read The Fine 
Manual (tm)? 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html is a great 
source.
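
For completeness, the alerts above simply say that an HTTP frontend cannot point at TCP backends; a minimal consistent sketch (frontend and backend names taken from the error output, addresses and ports are placeholders) looks like:

    frontend all
        mode http
        bind :80
        default_backend nodejs

    backend nodejs
        mode http
        option forwardfor
        server app1 127.0.0.1:8090 check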


Also using the http-no-delay option is heavily discouraged.

Why the project is not hosted on Github, I simply do not know. I'm not 
the author and I do not get to choose. But to be honest, I couldn't care 
less where the project is hosted. The support on haproxy has always been 
great for me and I do not think github would have made that better. Just 
because it is different and maybe a bit old school for some, doesn't 
make it really bad. And funny thing is, the other project you are 
referring to, nginx, isn't on GitHub either.


Out of office replies to a mailinglist are indeed a bit braindead, but 
since I have the option to delete emails they do not bother me. Just 
delete them and *poof*, go on with your life.


Happy HAProxying!

Sander




Re: Certificate order

2017-04-06 Thread Sander Klein

Hi Sander,

On 2017-04-06 10:45, Sander Hoentjen wrote:

Hi guys,

We have a setup where we sometimes have multiple certificates for a
domain. We use multiple directories for that and would like the
following behavior:
- Look in dir A for any match, use it if found
- Look in dir B for any match, use it if found
- Look in dir .. etc

This works great, except for wildcards. Right now a domain match in dir
B takes precedence over a wildcard match in dir A.

Is there a way to get haproxy to behave the way I describe?


I played around with this some time ago and my solution was to just 
think about the order of certificate loading. I then found out that the 
last certificate was preferred if it matched. Not sure if this has 
changed over time.
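
Something along these lines, where the order of the crt arguments is what matters (a sketch, the paths are placeholders):

    bind :443 ssl crt /etc/haproxy/certs-a/ crt /etc/haproxy/certs-b/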


Greets,

Sander



Re: CalDav with HAProxy

2016-11-11 Thread Sander Klein

On 2016-11-11 15:28, Alexandre Besnard wrote:


I use HAProxy as a reverse proxy to terminate SSL connections towards
all my VMs. So far so good except with Owncloud and CalDav.

When Owncloud is hidden behind HAProxy, I am not able to configure my
CalDav account under the Calendar app in Mac OS X (it works fine on
iOS or Android). If I bypass HAProxy and terminate the connection
directly on Apache server on my Owncloud VM, I am able to add the
account in the OS X Calendar, hence why I suspect HAProxy being the
problem.

When HAProxy is in front of my Owncloud VM, I can see the following
happening in the Apache access logs:

10.10.10.118 - - [11/Nov/2016:14:12:54 +] "PROPFIND
/.well-known/caldav HTTP/1.1" 301 577 "-" "Mac+OS+X/10.11.6 (15G1108)
accountsd/113"
10.10.10.118 - - [11/Nov/2016:14:12:54 +] "PROPFIND / HTTP/1.1"
405 996 "-" "Mac+OS+X/10.11.6 (15G1108) accountsd/113"
10.10.10.118 - - [11/Nov/2016:14:12:55 +] "PROPFIND /caldav/v2
HTTP/1.1" 405 1002 "-" "Mac+OS+X/10.11.6 (15G1108) accountsd/113"
10.10.10.118 - - [11/Nov/2016:14:12:55 +] "PROPFIND
/principals/users/wikus/ HTTP/1.1" 405 1006 "-" "Mac+OS+X/10.11.6
(15G1108) accountsd/113"
10.10.10.118 - - [11/Nov/2016:14:12:55 +] "PROPFIND /principals/
HTTP/1.1" 405 1002 "-" "Mac+OS+X/10.11.6 (15G1108) accountsd/113"
10.10.10.118 - - [11/Nov/2016:14:12:55 +] "PROPFIND
/dav/principals/ HTTP/1.1" 405 1000 "-" "Mac+OS+X/10.11.6 (15G1108)
accountsd/113

and I am unable to explain it…. Do we need to have a specific conf for
CalDav ? (by the way Cardav has the same issue).


Not the most helpful answer, but I have haproxy running with no special 
config at all in front of owncloud. I have been using Caldav without 
problems on osx 10.7-10.12


Can you share your config without any sensitive information? And, what 
version of haproxy are you using?


Greets,

Sander



Re: Haproxy dont Work

2016-05-21 Thread Sander Klein


> On 21 mei 2016, at 20:19, Pavlos Parissis <pavlos.paris...@gmail.com> wrote:
> 
>> On 21/05/2016 05:29 μμ, Sander Klein wrote:
>> 
>>> On 21 mei 2016, at 17:01, PiBa-NL <piba.nl@gmail.com> wrote:
>>> 
>>> Op 21-5-2016 om 15:44 schreef Sander Klein:
>>>>> On 2016-05-21 14:53, Marc Iglesias Hernandez wrote:
>>>>> I need to know how to set haproxy for users when they have gone
>>>>> through the haproxy have your real IP address, and not the haproxy.
>>>> 
>>>> Please keep it on the list.
>>>> 
>>>> You've got 2 options
>>>> 
>>>> 1. Add an X-Forwarded-For header: 
>>>> https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#option%20forwardfor
>>>> 
>>>> 2. Use the proxy protocol. http://blog.haproxy.com/haproxy/proxy-protocol/
>>> 3. Or use tproxy, with firewall rules to divert reply traffic and this 
>>> config option : source 0.0.0.0 usesrc clientip 
>>> http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.7.html#4.2-source
>>>> 
>>>> It all depends a bit on that you are trying to achieve and what backend 
>>>> software you are using.
>>>> 
>>>> Greets,
>>>> 
>>>> Sander
>>> But can i assume 'Haproxy DOES Work' on your machine? Getting it to run and 
>>> forward some basic traffic would be something to do before looking into 
>>> details like a client ip imho ..
>>> 
>>> Anyway if that works there are 3 options for making a backend aware of the 
>>> original a client ip:), each with their own (dis-)advantage. 
>>> https://gist.github.com/PiBa-NL/d826e0d6b35bbe4a5fc3
>> 
>> Ah yes, that's an option I never used with haproxy. I will build a test 
>> setup with that just to see how well it works.
> 
> You need to use iptables for this on HAProxy box, right?
> If that's the case then prepare for a drop of capacity on your HAProxy box.
> It also complicates your setup, troubleshooting problems at 03:00am could 
> lead to long outages.

I figure it has the same performance hit as using netfilter with IPVS.



Re: Haproxy dont Work

2016-05-21 Thread Sander Klein

On 2016-05-21 19:25, Marc Iglesias Hernandez wrote:

Hello?

2016-05-21 18:30 GMT+02:00 Marc Iglesias Hernandez
:


You've done the test configuration?


Please I asked you before, keep it on the list.

Would you be so kind to read 
http://linux.sgms-centre.com/misc/netiquette.php before we continue our 
conversation?


Greets,

Sander



Re: Haproxy dont Work

2016-05-21 Thread Sander Klein

> On 21 mei 2016, at 17:01, PiBa-NL <piba.nl@gmail.com> wrote:
> 
> Op 21-5-2016 om 15:44 schreef Sander Klein:
>> On 2016-05-21 14:53, Marc Iglesias Hernandez wrote:
>>> I need to know how to set haproxy for users when they have gone
>>> through the haproxy have your real IP address, and not the haproxy.
>> 
>> Please keep it on the list.
>> 
>> You've got 2 options
>> 
>> 1. Add an X-Forwarded-For header: 
>> https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#option%20forwardfor
>> 
>> 2. Use the proxy protocol. http://blog.haproxy.com/haproxy/proxy-protocol/
> 3. Or use tproxy, with firewall rules to divert reply traffic and this config 
> option : source 0.0.0.0 usesrc clientip 
> http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.7.html#4.2-source
>> 
>> It all depends a bit on that you are trying to achieve and what backend 
>> software you are using.
>> 
>> Greets,
>> 
>> Sander
> But can i assume 'Haproxy DOES Work' on your machine? Getting it to run and 
> forward some basic traffic would be something to do before looking into 
> details like a client ip imho ..
> 
> Anyway if that works there are 3 options for making a backend aware of the 
> original a client ip:), each with their own (dis-)advantage. 
> https://gist.github.com/PiBa-NL/d826e0d6b35bbe4a5fc3

Ah yes, that's an option I never used with haproxy. I will build a test setup 
with that just to see how well it works.
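
A minimal sketch of what I intend to try for option 3 (transparent mode; this assumes the tproxy kernel bits and the iptables diversion rules mentioned above are in place, and the backend name/address are placeholders):

    backend webfarm
        source 0.0.0.0 usesrc clientip
        server web1 192.0.2.11:80 check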



Re: Haproxy dont Work

2016-05-21 Thread Sander Klein

On 2016-05-21 14:53, Marc Iglesias Hernandez wrote:

I need to know how to set up haproxy so that users who have gone
through the haproxy keep their real IP address, and not the haproxy's.


Please keep it on the list.

You've got 2 options

1. Add an X-Forwarded-For header: 
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#option%20forwardfor


2. Use the proxy protocol. 
http://blog.haproxy.com/haproxy/proxy-protocol/


It all depends a bit on what you are trying to achieve and what backend 
software you are using.
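
If you go with option 1, it is usually a single line in an HTTP frontend, something like this sketch (the names are placeholders):

    frontend web_in
        mode http
        bind :80
        option forwardfor
        default_backend webservers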


Greets,

Sander



Re: Haproxy dont Work

2016-05-21 Thread Sander Klein

On 2016-05-21 13:16, Marc Iglesias Hernandez wrote:

Hello, I am trying to install HAProxy on a VPS with OVH kernel "Linux
vps81430.vps.ovh.ca [1] 3.10.0-327.18.2.el7.x86_64 # 1 SMP Thu Jul 12
11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU / Linux "but it does not
work.

How could I fix it?


Great, with this much info, I would suggest trying to wave around a dead 
chicken above your head while dancing side-to-side to Cyndi Lauper's 
'Girls Just Want to Have Fun' and try again after about 30 minutes. 
Unless you try this on a Sunday, then 60 minutes might be needed.


Or

You give us a little more info. For instance any error you are getting, 
the config you use, the Haproxy version. You know, the usual stuff.


Greets,

Sander Klein



RE: ssl parameters ignored

2015-11-26 Thread Sander Klein

Hi,

On 2015-11-26 01:17, Lukas Tribus wrote:

Sander, I can't reproduce what you are saying about the actual SSL
configuration though; no-sslv3 no-tlsv10 no-tlsv11 works as expected
for me (only tlsv1.2 possible). Please double check (curl -kv --tlsv1.1
https://localhost).


I must have had a brainfart during my testing. Indeed, it does disable 
tls 1.0 and tls 1.1 and works as advertised.


So, the only problem is/was the warning.

Greets,

Sander



Re: ssl parameters ignored

2015-11-24 Thread Sander Klein

Hi Nenad,

On 2015-11-24 16:15, Nenad Merdanovic wrote:

Can you post a minimal configuration (or full) which reproduces this?


Yes, here it is:

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon

# Default SSL material locations
#ca-base /etc/ssl/certs
#crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
	ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-server-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-server-verify none
tune.ssl.default-dh-param 4096

defaults
log global
mode http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

frontend web
bind x.x.x.x:80
bind x.x.x.x:443 ssl crt /etc/haproxy/SSL/ strict-sni
bind x:x:x::x:80
bind x:x:x::x:443 ssl crt /etc/haproxy/SSL/

mode http
maxconn 4096

option httplog
option splice-auto

capture request header Host len 64
capture request header User-Agent   len 16
capture request header Content-Length   len 10
capture request header Referer  len 256
capture response header Content-Length  len 10

use_backend nginx

backend nginx
fullconn 128
mode http

option abortonclose
option http-keep-alive

server nginx 127.0.0.1:443 ssl cookie nginx send-proxy



RE: ssl parameters ignored

2015-11-24 Thread Sander Klein

Hi,

On 2015-11-23 22:36, Lukas Tribus wrote:

Are you sure that the executable was cleanly build (first "make clean",
only then "make ...")?


I don't know. I got pre made packages from "http://haproxy.debian.net 
jessie-backports-1.6 main" maintained by Vincent Bernat if I'm correct.



Can you elaborate what kind of OS we are talking about, and where the
openssl lib comes from (is it just a openssl-dev package from the
repository, or a custom build? static or shared?)


It is a standard Debian 8 installation with nothing fancy. Just the 
haproxy package from the repo above.


Sander



ssl parameters ignored

2015-11-23 Thread Sander Klein

Hi All,

I'm running haproxy 1.6.2 and it seems it ignores the values given with 
ssl-default-bind-options and/or ssl-default-server-options.


I have the following in my global conf:

ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-server-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS



When testing this config I get:

[ALERT] 326/202736 (24201) : SSLv3 support requested but unavailable.
Configuration file is valid

After testing with ssllabs I also noticed tlsv10 and tlsv11 were still 
enabled. Downgrading to haproxy 1.5.14 removes the error when testing 
the config and shows the tls protocols as disabled when using ssllabs.


Did something change between 1.5 and 1.6 so my config doesn't work 
anymore?


Greets,

Sander



Re: ssl parameters ignored

2015-11-23 Thread Sander Klein

Hey Lukas,

On 2015-11-23 21:27, Lukas Tribus wrote:
1.5.15 is probably affected as well (the error above comes from a build fix
for libssl that has been backported to 1.5).


Heh, didn't notice that release, else I would have tested with that 
one...


Can you provide "haproxy -vv" output of both 1.5.14 and 1.6.2 
executables?


Yes!

[ALERT] 326/214402 (27635) : SSLv3 support requested but unavailable.
HA-Proxy version 1.6.2 2015/11/03
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector-strong -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2

  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 
200


Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

---

HA-Proxy version 1.5.14 2015/07/02
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector-strong -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2

  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 
200


Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1k 8 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Greets,

Sander





Microsoft Edge 408

2015-09-24 Thread Sander Klein

Hi,

I have some clients that complain about getting 408 errors with 
Microsoft Edge. I haven't been able to catch such a request yet, but I 
am wondering if this is the same as the Google Chrome preconnect 
problem.


Anyone by any chance got the same experience or any ideas on this?

Greets,

Sander



Re: Question regarding haproxy nagios setup

2015-06-19 Thread Sander Klein

On 2015-06-19 16:08, Mauricio Aguilera wrote:

The problem is the ';' before the csv in the URL.

I have the same problem and was able to detect that
Nagios cuts the command off there and
obviously it then runs incorrectly.
I tried passing the values with   and ' ', but nothing...

Does anyone have any ideas?


I would like to try asking this question again in English, given that 
most of the world's population does not speak Spanish.





Re: HA proxy configuartion

2015-05-04 Thread Sander Klein

On 2015-05-04 07:35, ANISH S IYER wrote:

Hi

while configuring Ha proxy.

mv /etc/haproxy/haproxy.cfg{,.original}

What is the meaning of this line? What do you mean by original?


It will move the file haproxy.cfg to haproxy.cfg.original. So, it is the 
same as mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.original


Sander



Re: HA proxy configuartion

2015-05-04 Thread Sander Klein

Hey,

please keep it on the list...

On 2015-05-04 10:19, ANISH S IYER wrote:

Hi
thanks for your fast reply

after configuring the HA proxy

the log file seems like

May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_in started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_in started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_http started.
May  4 03:42:00 discourse haproxy[3590]: Proxy haproxy_http started.
May  4 03:42:00 discourse haproxy[3590]: Proxy admin started.
May  4 03:42:00 discourse haproxy[3590]: Proxy admin started.
May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.
May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.
May  4 03:42:00 discourse haproxy[3590]: backend haproxy_http has no
server available!
May  4 03:42:00 discourse haproxy[3590]: backend haproxy_http has no
server available!


The problem appears to be this:

May  4 03:42:00 discourse haproxy[3590]: Server haproxy_http/apache is
DOWN, reason: Layer4 connection problem, info: Connection refused,
check duration: 0ms. 0 active and 0 backup servers left. 0 sessions
active, 0 requeued, 0 remaining in queue.

Haproxy cannot connect to your backend servers. Maybe you are using the 
wrong ip/port or some firewall is bugging you.
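
A quick way to verify this from the haproxy machine itself (a sketch; take the address and port from your own 'server' line, the ones below are placeholders):

    nc -vz 10.0.0.10 80
    # or
    curl -v http://10.0.0.10/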


Sander



Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 12:09, Mathieu Sergent wrote:

Hi,

I am trying to set up load balancing with HAProxy and 3 web servers.
I want to receive the client's address on my web servers.
I read that it is possible with the option  source ip usesrc   but
you need to be root.
If you do not want to be root, you have to use HAProxy with Tproxy.
But Tproxy demands too much system configuration.
Is there another solution?
I hope that you have understood my problem.

Yours sincerely.

Mathieu Sergent

PS : Sorry for my English.


Your English is no problem. ;-)

You can add an X-Forwarded-For header using haproxy. If you then use 
mod_rpaf for apache or realip on nginx you can easily substitute the 
loadbalancer ip with the ip of the client.
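
On the nginx side that is roughly the following (a sketch; the set_real_ip_from address must be your load balancer's IP, shown here as a placeholder):

    set_real_ip_from 10.0.0.1;
    real_ip_header X-Forwarded-For;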


Regards,

Sander




Re: Help haproxy

2015-02-02 Thread Sander Klein

Hi Mathieu,

Pleas keep the list in the CC.

On 02.02.2015 15:26, Mathieu Sergent wrote:

Thanks for your reply.

I just used the option forwardfor in the haproxy configuration. And I
can find the client's address from my web server (with tcpdump).
But if I don't use the option forwardfor, the web server still finds
the client's address. Does that make any sense?


To be honest, that doesn't make any sense to me. Are you sure you have 
reloaded the haproxy process after you removed the forwardfor?


Or, could it be you are using the proxy protocol (send-proxy)?

Greets,

Sander



Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 16:33, Mathieu Sergent wrote:

Hi Sander,

Yes, I reloaded the haproxy and my web server too. But no change.
And I'm not using the proxy protocol.

To give you more details: on my web server I used tcpdump, which gives
me back the header of the HTTP request. And in this I found
my client's address.
But it is really strange that I can do it without the forwardfor.


The only other thing that I can think of is that your client is behind a 
proxy server which adds the X-Forwarded-For header for you...


Or you got something strange in your config...

Sander



Re: Serveur Haproxy

2015-01-20 Thread Sander Klein

On 20.01.2015 10:54, andriatsiresy johary wrote:

I have set up a load balancing system for a database cluster, with
HAProxy, on a Debian 7. I have enabled the HAProxy statistics page and
I do not know where to find the source code of this page. Could you
help me please? Thanks.


The statistics page is generated by the haproxy process itself. So
there is no HTML file that gets updated every time.


Or are you really looking for the place within the haproxy source code
where this page is built?


Sander



Regex

2014-12-01 Thread Sander Klein

Hi,

I'm testing some stuff with quite a big regex and now I am wondering 
what would be more efficient. Is it more efficient to load the regex 
with -i, or is it better to spell out the case-insensitivity in the regex itself?


So,

-i (some|words)

or

((S|s)(O|o)(M|m)(E|e)|(W|w)(O|o)(R|r)(D|d)(S|s))

Greets,

Sander



Re: Just had a thought about the poodle issue....

2014-10-20 Thread Sander Klein

On 18.10.2014 16:37, David Coulson wrote:

You mean like this?

http://blog.haproxy.com/2014/10/15/haproxy-and-sslv3-poodle-vulnerability/


On 10/18/14, 10:34 AM, Malcolm Turnbull wrote:
I was thinking Haproxy could be used to block any non-TLS 
connection

Like you can with iptables:
https://blog.g3rt.nl/take-down-sslv3-using-iptables.html

However it would be nice if you had users trying to connect via IE6/7
etc on XP to display a nice message like, please upgrade to a secure
browser chrome or firefox etc?

Is that easy to do?


Is something like this also possible with SNI or strict-SNI enabled? I 
would like to issue a message when a browser doesn't support SNI.


Sander



Re: [ANNOUNCE] haproxy-1.5.0

2014-06-20 Thread Sander Klein

On 19.06.2014 21:54, Willy Tarreau wrote:

Hi everyone,

The list has been unusually silent today, just as if everyone was waiting
for something to happen :-)

Today is a great day, the reward of 4 years of hard work. I'm announcing
the release of HAProxy 1.5.0.


Congratulations!

Now people can finally stop bugging me about using dev versions in 
production, let's upgrade to 1.6-dev0 ;-)


Sander



Re: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-19 Thread Sander Klein

On 19.05.2014 06:51, Willy Tarreau wrote:

Hi Rémi,

On Mon, May 12, 2014 at 06:34:01PM +0200, Remi Gacogne wrote:

Hi,

On 05/05/2014 12:06 PM, Sander Klein wrote:

 I've added a 2048bit dhparam to my most used certificates and I don't
 see a big jump in resource usage.

 This was not a big scientific test, I just added the DH params in my
 production and looked if the haproxy process started eating more CPU. As
 far as I can tell CPU usage went up just a couple percent. Not a very
 big deal.

 So, to me using 2048bit doesn't seem like a problem. And... I can
 always switch to nbproc > 1 ;-)

Thank you Sander for taking the time to do this test! I am still not
sure it is a good idea to move to a default of 2048 bits though.

Here is a new version of the previous patch that should not require
OpenSSL 0.9.8a to build, but instead includes the needed primes from
rfc2409 and rfc3526 if OpenSSL does not provide them. I have to admit 
I

don't have access to an host with an old enough OpenSSL to test it
correctly. It still defaults to use 1024 bits DHE parameters in order
not to break anything.

Willy, do you have any thoughts about this patch or any other way to
simplify the use of stronger DHE parameters in haproxy 1.5? I know it
can already be done by generating static DH parameters, but I am 
afraid

most administrators may find it too complicated and therefore not dare
to test it.


I'd have applied a very simple change to your patch : I'd have 
initialized
global.tune.ssl_max_dh_param to zero by default, and emitted a warning 
here :


+   if (global.tune.ssl_max_dh_param <= 1024) {
+   /* we are limited to DH parameter of 1024 bits anyway */
+   Warning("Setting global.tune.ssl_max_dh_param to 1024 by default, "
+   "if your workload permits it you should set it to at least 2048. "
+   "Please set a value >= 1024 to make this warning disappear.");
+   global.tune.ssl_max_dh_param = 1024;
+   dh = ssl_get_dh_1024();
+   if (dh == NULL)
+   goto end;

What do you think ? That way it seems like only people really using the 
default

value will get the warning.


What happens if you also have DH appended to your certificates? You set 
global.tune.ssl_max_dh_param to 1024 but you have a 4096bit DH in your 
certificate file, which one is used then? An answer could be 'Don't do 
that' :-) but I was curious.


Greets,

Sander



RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-05 Thread Sander Klein

On 02.05.2014 16:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).



Please note that Sander used 4096bit - which is why he saw huge CPU 
load.


Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 
2048bit

dhparam manually (in the cert file).



I've added a 2048bit dhparam to my most used certificates and I don't 
see a big jump in resource usage.


This was not a big scientific test, I just added the DH params in my 
production and looked if the haproxy process started eating more CPU. As 
far as I can tell CPU usage went up just a couple percent. Not a very 
big deal.


So, to me using 2048bit doesn't seem like a problem. And... I can 
always switch to nbproc > 1 ;-)


Greets,

Sander



RE: [PATCH] Add a configurable support of standardized DH parameters >= 1024 bits, disabled by default

2014-05-02 Thread Sander Klein

On 02.05.2014 16:52, Lukas Tribus wrote:

Hi Remi,




The default value for max-dh-param-size is set to 1024, thus keeping
the current behavior by default. Setting a higher value (for example
2048 with a 2048 bits RSA/DSA server key) allows an easy upgrade
to stronger ephemeral DH keys (and back if needed).



Please note that Sander used 4096bit - which is why he saw huge CPU 
load.


Imho we can default max-dh-param-size to 2048bit.


Best thing would be if Sander could test in his environment with a 
2048bit

dhparam manually (in the cert file).


I'll try to test around a bit this weekend.

Sander



RE: CPU increase between ss-20140329 and ss-20140425

2014-04-26 Thread Sander Klein

Hey All,

Sorry for my late response, but we have a national holiday here... 
'King's Day' would be the translation ;-)


On 26.04.2014 13:53, Lukas Tribus wrote:

Hi,



- recommit the patch I submitted as it is, and let users concerned with
the CPU impact use static DH parameter in the certificate file.


What do you mean by "use static DH parameter in the cert file"? Is this
something the user can decide after the cert is emitted? Is it something
easy to do?


Yes, Emeric's hard-coded dhparams or Remi's automated dhparams are only a
fallback in case the crt file doesn't contain dhparams.

The file needs to look like:
crt /path/to/cert+privkey+intermediate+dhparam

Whereas the dhparam are simply the result of:
 openssl dhparam 1024/2048/...


Also, one important thing to understand here is that this matters only with
*_DHE_* ciphers. It's not used with legacy non-PFS RSA ciphers or with ECDHE
ciphers.

For example not a single browser uses _DHE_ ciphers on demo.1wt.eu [2], so
the problem would never show (unless an attacker uses DHE deliberately to
saturate the server's CPU).


Sander, can you tell us your exact cipher configuration? It may be
suboptimal. I would recommend the configuration from [3]. Do you
have a lot of Java 6 clients connecting to this service btw?


My cipher config is:

ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

I've disabled sslv3 and use certificates with 4096bits keys. I know 4096 
bits keys are a bit over the top, but while testing the impact seemed to 
be acceptable so I thought 'What the heck, let's just use it'


I'll have a look at the recommended config from [3].

I don't think there are a lot of java clients connecting. We do expose 
some api's which might be accessed by java clients, but that wouldn't be 
more than 1% of the clients.



Also check if tls-tickets and ssl-session caching works correctly.


ssllabs says ssl resumption (caching) and ssl resumption (tickets) are 
working.


Greets,

Sander



RE: CPU increase between ss-20140329 and ss-20140425

2014-04-26 Thread Sander Klein

On 26.04.2014 16:07, Lukas Tribus wrote:

Hi,


I've disabled sslv3 and use certificates with 4096bits keys. I know 
4096
bits keys are a bit over the top, but while testing the impact seemed 
to

be acceptable so I thought 'What the heck, let's just use it'


That's it, with Remi's patch your dhparam was upgraded to 4096bit, we
assumed they had been upgraded to 2048bit only.

DHE with 4096bit keys and dhparam will clearly kill performance.


Drat, so my nice labtest with haproxy and different key sizes was 
completely useless :-) It does explain why I didn't understand the 
problem with 4096bit keys.


Sander



CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

Hi,

I noticed a dramatic increase in CPU usage between HAProxy ss-20140329 
and ss-20140425. With the first haproxy uses around 20% of CPU and with 
the latter it eats up 80-90% of cpu and sites start to become sluggish. 
Health checks take much more time to complete: 1100ms vs 2ms normally.


Nothing in my config has changed and when I downgrade everything returns 
to normal.


Info about haproxy:

haproxy -vvv
HA-Proxy version 1.5-dev23-3c1b5ec 2014/04/24
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 
USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 
200


Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.




Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

Hey Willy,

On 25.04.2014 14:39, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 02:12:23PM +0200, Sander Klein wrote:

Hi,

I noticed a dramatic increase in CPU usage between HAProxy ss-20140329
and ss-20140425. With the first haproxy uses around 20% of CPU and 
with
the latter it eats up 80-90% of cpu and sites start to become 
sluggish.

Health checks take much more time to complete 1100ms vs 2ms normal.

Nothing in my config has changed and when I downgrade everything 
returns

to normal.


I really don't like that at all :-(

I remember that you were running with gzip compression enabled in your config,
is that still the case ? It would be possible that in the past very few
responses were compressed due to the bug forcing us to disable compression
of chunked-encoded objects, and that now they're compressed and eat all the
CPU. In order to be sure about this, could you please try to disable
compression in your config just as a test ? Otherwise, despite the numerous
changes, I see very few candidates for such a behaviour :-/


I currently don't have compression enabled in my config. I disabled it 
some time ago because of CPU usage ;-)


With the current snapshot I do get some warnings:

[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:185] : 
a 'http-request' rule placed after a 'reqxxx' rule will still be 
processed before.
[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:240] : 
a 'block' rule placed after an 'http-request' rule will still be 
processed before.


But I don't suspect those are the issue.

Just to make sure I didn't give you a bogus report I 
upgraded/downgraded a couple of times, but every time I install 20140425 
the CPU spikes and sites become sluggish.


Care for my config?

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 15:46, Willy Tarreau wrote:

Just to make sure I didn't give you a bogus report, I
upgraded/downgraded a couple of times, but every time I install 
20140425

the CPU spikes and sites become sluggish.


OK. Does it happen immediately or does it take some time ?


It happens immediately. It might take some time, 0-10 seconds, before I 
see the health checks jump in check time. Each time it's a different 
check that spikes; it's not like they are all continuously high in 
latency.





Care for my config?


Sure!


In a separate mail ;-)

Greets,

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 15:46, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 03:34:14PM +0200, Sander Klein wrote:

I currently don't have compression enabled in my config. I disabled it
some time ago because of CPU usage ;-)


Ah too bad, it would have been an easy solution!


With the current snapshot I do get some warnings:

[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:185] 
:

a 'http-request' rule placed after a 'reqxxx' rule will still be
processed before.
[WARNING] 114/152646 (12418) : parsing [/etc/haproxy/haproxy.cfg:240] 
:

a 'block' rule placed after an 'http-request' rule will still be
processed before.

But I don't suspect those are the issue.


No they're unrelated, the check was not made in the past and could
lead to possibly erroneous configs, so now the warning tells you
how your current configuration is understood and used (hint: just
move your reqxxx rules *after* http-request rules, then move the
block rules before http-request and the warning will go away).
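As a minimal sketch of that reordering (the ACL, header and frontend name are made up; only the relative order of the three rule types matters):

frontend example-fe
    bind :80
    block if { src 192.0.2.0/24 }
    http-request set-header X-Example yes
    reqidel ^X-Internal: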


Just to make sure I didn't give you a bogus report, I
upgraded/downgraded a couple of times, but every time I install 
20140425

the CPU spikes and sites become sluggish.


OK. Does it happen immediately or does it take some time ?


Care for my config?


Sure!


I've done a search and it breaks between 20140413 and 20140415.

Greets,

Sander



Re: CPU increase between ss-20140329 and ss-20140425

2014-04-25 Thread Sander Klein

On 25.04.2014 17:22, Willy Tarreau wrote:

On Fri, Apr 25, 2014 at 04:56:06PM +0200, Sander Klein wrote:

I've done a search and it breaks between 20140413 and 20140415.


OK, that's already very useful. I'm assuming this covers the period
between commits 01193d6ef and d988f2158. During this period, here's
what changed that could possibly affect your usage, even if unlikely :

  - replacements for sprintf() using snprintf() : it would be possible
that some of them would be mis-computed and result in a wrong size
causing something to loop over and over. At first glance it does
not look like this but it could be ;

  - getaddrinfo() is used by default now and you have the build option 
for
it. Your servers are referenced by their IP addresses so I don't 
see

why that could fail. Still it's possible to disable this by setting
the global statement nogetaddrinfo in the global section if you 
want

to test. It's highly unlikely that it could be related but it could
trigger a corner case bug somewhere else.

  - ssl: Add standardized DH parameters >= 1024 bits
(I still don't understand what this is about, I'm clearly far from
being even an SSL novice). I have no idea whether it can be related
or not, but at least you're using SSL so everything is possible.

  - fix conversion of ipv4 to ipv6 in stick tables : you don't have 
any.


  - language converter : you don't have it

  - unique-id : you don't have it

  - crash on sc_tracked : you don't use it.

Thus given your setup, I'd start with the thing I understand the least,
which is the SSL change. Could you please revert the attached patch
by applying it with patch -Rp1 ?
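For reference, applying such a revert would look roughly like this (the patch filename is hypothetical, and the build options are just the ones shown earlier in the thread):

cd haproxy-ss-20140415
patch -Rp1 < ../dh-params.patch
make TARGET=linux26 USE_OPENSSL=1 USE_ZLIB=1 USE_PCRE=1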



Well, I can confirm that reverting that patch fixes my issue. Got 
20140415 running now and CPU usage is normal.


Greets,

Sander



Re: Generating a haproxy cluster

2014-03-26 Thread Sander Klein

Hi

On 24.03.2014 18:35, Andy Walker wrote:

For what it's worth, haproxy can be running on a server, and listening
on IP addresses that aren't actually associated with that server. In
Linux, just make sure net.ipv4.ip_nonlocal_bind is set to 1, and
this will allow haproxy to bind to addresses that aren't currently
associated with that server. This is handy for very basic HA solutions
like keepalived, where you may just want the HA service managing IPs,
and not necessarily turning on and off haproxy as well.


Just to add a little note to this, you could also use 'transparent' in 
your bind directive instead of the net.ipv4.ip_nonlocal_bind setting. This 
way the config will work with both IPv4 and IPv6.
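A minimal sketch with placeholder addresses (note that 'transparent' on a bind line requires haproxy built with USE_LINUX_TPROXY and a kernel that supports it):

frontend www
    bind 192.0.2.10:80 transparent
    bind 2001:db8::10:80 transparent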


Regards,

Sander



Re: Generating a haproxy cluster

2014-03-26 Thread Sander Klein

Hey,

On 26.03.2014 12:17, Jarno Huuskonen wrote:

Hi,

On Wed, Mar 26, Sander Klein wrote:

Hi

On 24.03.2014 18:35, Andy Walker wrote:
For what it's worth, haproxy can be running on a server, and listening
on IP addresses that aren't actually associated with that server. In
Linux, just make sure net.ipv4.ip_nonlocal_bind is set to 1, and
this will allow haproxy to bind to addresses that aren't currently
associated with that server. This is handy for very basic HA solutions
like keepalived, where you may just want the HA service managing IPs,
and not necessarily turning on and off haproxy as well.

Just to add a little note to this, you could also use 'transparent'
in your bind directive instead of the net.ipv4.ip_nonlocal_bind
setting. This way the config will work with both IPv4 and IPv6.


Also one option could be to bind all addresses to 'lo' interface
(http://comments.gmane.org/gmane.comp.web.haproxy/7317)

(this seems to work for ipv6 addresses). I have something like
this in /etc/sysconfig/haproxy:

#
# UEF: add ipv6 addrs to lo
#
LOADDRS=('2001:xyz:xyz:xyz::bad:201/64' '2001:xyz:xyz:xyz::bad:202/64')
for addr in ${LOADDRS[@]}; do
    /sbin/ip -6 addr show lo | /bin/grep -q ${addr} > /dev/null
    if [ $? -ne 0 ]; then
        /sbin/ip -6 addr add ${addr} dev lo
    fi
done

(and keepalived manages/adds those addresses to ethX interface).


I had that indeed (since I'm in that thread you're referencing ;-) ) but 
sometimes ran into problems when using this in combination with 'vmac' on 
keepalived. The transparent option is much cleaner since it makes sure 
you don't run into sudden MAC leakage or other self-inflicted 
stupidities.


Greets,

Sander



Re: System tuning for Haproxy

2014-03-12 Thread Sander Klein

On 12.03.2014 10:36, William Lewis wrote:

Hi,

I’m looking for any advice in tuning kernel parameters for haproxy.

Current sysctl.conf is

net.ipv4.icmp_echo_ignore_broadcasts = 1
fs.file-max = 800
vm.swappiness = 20
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_syn_backlog = 32768
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 4096 65535
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fack = 0
net.ipv4.tcp_timestamps = 0
net.core.rmem_default = 262144
net.core.rmem_max = 52428800
net.core.wmem_max = 52428800
net.core.somaxconn = 65535
net.ipv4.tcp_user_cwnd_max = 20
kernel.nmi_watchdog = 0
net.ipv4.igmp_max_memberships = 2000
net.ipv4.igmp_max_msf = 2000


I had a *lot* of trouble with enabling net.ipv4.tcp_tw_recycle. Under 
heavy load, connections started breaking. Setting it back to 0 fixed 
it. So you might want to be careful with that one.
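Backing it out at runtime is a one-liner if you want to test before touching sysctl.conf:

sysctl -w net.ipv4.tcp_tw_recycle=0
sysctl net.ipv4.tcp_tw_recycle        # verify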


Read 
http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html for 
more info on that.


Greets,

Sander



Re: [PATCH] MINOR: set IP_FREEBIND on IPv6 sockets in transparent mode

2014-03-04 Thread Sander Klein

On 03.03.2014 21:31, Willy Tarreau wrote:

On Mon, Mar 03, 2014 at 09:10:51PM +0100, Lukas Tribus wrote:
Lets set IP_FREEBIND on IPv6 sockets as well, this works since Linux 
3.3

and doesn't require CAP_NET_ADMIN privileges (IPV6_TRANSPARENT does).

This allows unprivileged users to bind to non-local IPv6 addresses, 
which

can be useful when setting up the listening sockets or when connecting
to backend servers with a specific, non-local source IPv6 address (at 
that

point we usually dropped root privileges already).


Patch applied, thank you Lukas!


I will test the patch. Stupid question, but is it really supported from 
3.3 and higher? A quick test with dev22 yesterday seemed to be working 
but I didn't put any traffic through it. It was late so I didn't give it 
enough attention ;-)


Sander



Support IP_FREEBIND

2014-03-03 Thread Sander Klein

Hi,

Would it be possible to support IP_FREEBIND with HAProxy-1.5 on Linux?

I'm asking because nonlocal_bind only works for IPv4 and it seems Linux 
upstream does not want to support nonlocal_bind for IPv6.


A thread about this can be found here: 
http://comments.gmane.org/gmane.comp.web.haproxy/7317


Currently I'm binding IPs to a dummy interface so HAProxy can start, 
but this is starting to become a nightmare.
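For anyone wondering what that workaround looks like, it is roughly this, with placeholder addresses (and it has to be repeated for every frontend address, which is what makes it a nightmare):

ip link add dummy0 type dummy
ip link set dummy0 up
ip -6 addr add 2001:db8::6/128 dev dummy0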


Greets,

Sander



Re: Support IP_FREEBIND

2014-03-03 Thread Sander Klein

On 03.03.2014 14:45, Sander Klein wrote:

Hi,

Would it be possible to support IP_FREEBIND with HAProxy-1.5 on Linux?

I'm asking because nonlocal_bind only works for IPv4 and it seems
Linux upstream does not want to support nonlocal_bind for IPv6.

A thread about this can be found here:
http://comments.gmane.org/gmane.comp.web.haproxy/7317

Currently I'm binding IPs to a dummy interface so HAProxy can start,
but this is starting to become a nightmare.


Replying to myself... I'm probably looking for the 'transparent' option. 
Looking at the docs it seems to do what I want...


Greets,

Sander



Re: http-keep-alive broken?

2014-01-10 Thread Sander Klein

Heyz,

On 10.01.2014 09:14, Willy Tarreau wrote:

Hi Sander,

On Fri, Jan 10, 2014 at 08:57:18AM +0100, Sander Klein wrote:

Hi,

I'm sorry you haven't heard from me yet. But I didn't have time to 
look

into this issue. Hope to do it this weekend.


Don't rush on it, Baptiste has reported to me a reproducible issue on 
his
lab which seems to match your problem, and which is caused by the way 
the

polling works right now (which is the reason why I want to address this
before the release). I'm currently working on it. The fix is far from 
being

trivial, but necessary.


Do you still want me to bisect? Or should I wait? If you think the 
problem is the same I'll just test the fix :-)


Sander



Re: http-keep-alive broken?

2014-01-09 Thread Sander Klein

Hi,

I'm sorry you haven't heard from me yet. But I didn't have time to look 
into this issue. Hope to do it this weekend.


Greets,

Sander



Re: http-keep-alive broken?

2014-01-06 Thread Sander Klein

On 06.01.2014 15:10, Willy Tarreau wrote:
I would go even further (using git). What I understand here is that the 
issue
was introduced after the epoll optimization and is hidden by this one. 
So I'd
rather start by reverting that patch and then looking for another 
faulty

patch after those :

  1) create a new branch called test1 starting at the first faulty 
commit :


 git checkout -b test1 2f877304

  2) apply the revert patch first :

 git cherry-pick 3ef5af3d

  3) OK now both the faulty patch and the revert are merged, it makes 
sense

 to confirm that the bug is still not there.

  4) now rebase all further patches on top of these ones : Git will 
re-apply
 all other patches after the ones above. You will thus have a 
working

 version to start from :

 git checkout -b test2 master
 git rebase test1

  5) ensure that branch test2 is wrong by doing a test

  6) bisect the code from test1 which was verified to be good at 3) and 
test2

 which was verified to be bad at 5) :

 git bisect start test2 test1

It will offer you another patch which introduced the regression hidden 
by the

one above.
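Collected into one copy-pasteable sketch of the steps above (build and test at the two "confirm" points with your usual options):

git checkout -b test1 2f877304   # 1) branch at the first faulty commit
git cherry-pick 3ef5af3d         # 2) apply the revert on top of it
                                 # 3) build/test: the bug should NOT be present
git checkout -b test2 master     # 4) replay all later patches on top
git rebase test1
                                 # 5) build/test: the bug SHOULD be present
git bisect start test2 test1     # 6) bisect between the two points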


I will do the bisect as soon as I have time.

On a side note, today I had an issue with another loadbalancer running 
ss-20140101 which showed almost the same behavior as the 'NTLM' bug I 
was having (hanging connection, or waiting a long time and then getting 
a corrupt file). This bug only happened with certain downloads (JPGs) 
with HTTP compression enabled. If a browser requested the file without 
the compression header, everything was fine.


Downgrading to dev19 also fixed this issue. I don't know if this could 
be related somehow.


Greets,

Sander



RE: http-keep-alive broken?

2014-01-05 Thread Sander Klein

Hey,

On 05.01.2014 17:33, Lukas Tribus wrote:

Hi,


Well, after spending some time compiling, testing, compiling, testing, I
finally found that the patch
0103-OPTIM-MEDIUM-epoll-fuse-active-events-into--1.5-dev19.diff done
between 20131115 and 20131116 is causing my problems.

I also found that this problem is much easier to reproduce on Safari
than on Firefox or Chrome.


Ok. Can you try whether disabling epoll works around this problem (noepoll
in the config or command-line argument -de [1]), to double check it has
to do with epoll?
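Concretely, either of these should do it (the config path is just an example):

global
    noepoll

# or, on the command line:
haproxy -de -f /etc/haproxy/haproxy.cfg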


Disabling epoll doesn't fix it... drat... Tested it with ss-20140104. 
Could it be a more subtle bug somewhere else, which the (reverted) epoll 
patch and some other patch currently included make easier to trigger?



The weird thing is that this commit has been reverted in dev21 but I
still have the problem in dev21. So I am a bit confused.


No, dev21 doesn't have this revert. dev21 was released December 16th and
the offending commit 2f877304ef (from November 15th) was reverted via
commit 3ef5af3dcc on December 20th.


Sorry, that's actually what I meant.


Just to be on the safe side: could you download a clean and up-to-date
snapshot haproxy-ss-20140104 [2], to avoid any missing patches?


Did that, with and without epoll enabled and it both fails.

So in the end, haproxy-ss-20131115 [3] works fine and 
haproxy-ss-20131116

[4] has this problem, correct?

I know this triple checking sucks, but what you are reporting doesn't 
make

sense because, like you said yourself, this was reverted.


No problem, check as much as you want. It sucks if I somehow push you 
guys in the wrong direction.


But, yes, that is correct. 20131115 works and 20131116 doesn't. I tested 
it a couple of times. The bug is very, very subtle, I just found out. 
When using OS X 10.9 with Safari 7 it fails with, for instance, 20140104 
and 20131116, and works with 20131115. But if I take an older 10.8 
machine with Safari 6 it works with all versions.


I am losing my mind here ;-) I'm pretty sure I saw other 
platforms/browsers hang in the same way, but that was all under load: ~150 
people accessing their servers during office starting hours.


Greets,

Sander



RE: http-keep-alive broken?

2014-01-04 Thread Sander Klein

Heyz,

On 03.01.2014 22:52, Lukas Tribus wrote:

Hi,


The problem I'm having (also tested with ss-20140101 yesterday) 
happens
with http-keep-alive enabled and also when just running in tunnel 
mode.
But, when http-keep-alive is enabled I get the problem with ~98% of 
the

requests and in tunnel mode I get it with ~10% of the requests.
Authentication seems to succeed but the connection just 'hangs'.
Sometimes refreshing 10 times fixes it.


Ah, that's interesting. Then the issue is probably not directly related to
keep-alive; it is probably just triggered with a much higher likelihood.


Well, after spending some time compiling, testing, compiling, testing, I 
finally found that the patch 
0103-OPTIM-MEDIUM-epoll-fuse-active-events-into--1.5-dev19.diff done 
between 20131115 and 20131116 is causing my problems.


I also found that this problem is much easier to reproduce on Safari 
than on Firefox or Chrome.


The weird thing is that this commit has been reverted in dev21 but I 
still have the problem in dev21. So I am a bit confused.


Greets,

Sander



RE: http-keep-alive broken?

2014-01-04 Thread Sander Klein

Hey,

On 03.01.2014 22:52, Lukas Tribus wrote:
You said that one of your backends is exchange 2012. What release are 
the
other ntlm-auth backends exactly and is the issue the same on all of 
them?


All backends are Windows 2012 with the standard IIS that comes with it. 
I have the problem on all of them, but not always at the same time.


Greets,

Sander



Re: http-keep-alive broken?

2014-01-03 Thread Sander Klein

Hi Baptiste, Lukas,

@Lukas: Sorry, I misread your tunnel-mode as tcp-mode. Tunnel mode works 
(almost) fine, as you can read below.


I have been investigating my problem a bit more, and then I remembered 
that I also updated haproxy a week before we started using our new 
Windows 2012 servers.


The problem I'm having (also tested with ss-20140101 yesterday) happens 
with http-keep-alive enabled and also when just running in tunnel mode. 
But, when http-keep-alive is enabled I get the problem with ~98% of the 
requests and in tunnel mode I get it with ~10% of the requests. 
Authentication seems to succeed but the connection just 'hangs'. 
Sometimes refreshing 10 times fixes it.


I have downgraded to dev19 this morning and it seems that the problem 
went away in tunnel mode. (http-keep-alive is of course not available)


While I am not sure yet, it could be that something broke between dev19 
and dev21. This may sound a bit silly, but connections to our IIS servers 
'feel' faster and more responsive when using dev19.


I will build a small test environment to see if I can reproduce it and 
capture some traffic. Right now it's just a hunch.


My config is below. When I use http-keep-alive I just uncomment 
'option http-keep-alive' and comment out 'no option http-server-close'.


###
# Global Settings
###
global
log 127.0.0.1 local0

daemon
user    haproxy
group   haproxy
maxconn 32768
spread-checks   3
stats socket /var/run/haproxy.stat mode 666 level admin

###
# Defaults
###
defaults
log global

mode http

option abortonclose

timeout check   2s
timeout client  10s
timeout connect 10s
timeout http-keep-alive 30s
timeout http-request    30s
timeout queue   15s
timeout server  10s
timeout tarpit  120s

###
# Define the admin section
###
listen admin
bind X.X.X.1:8080
bind 2001:x:x:x::1:8080
stats enable
stats uri   /haproxy?stats
stats auth  admin:somepass
stats admin if TRUE
stats refresh 5s

###
# Frontend for services
###
frontend default-fe
bind X.X.X.37:80
bind 2001:X:X:X::6:80
bind X.X.X.37:443 ssl crt /etc/haproxy/ssl/cert.pem crt 
/etc/haproxy/ssl/othercert.pem ciphers RC4:HIGH:!aNULL:!MD5
bind 2001:X:X:X::6:443 ssl crt /etc/haproxy/ssl/cert.pem crt 
/etc/haproxy/ssl/othercert.pem ciphers RC4:HIGH:!aNULL:!MD5


option httplog
option forwardfor

# Add X-Forwarded-* headers
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

# Define hosts which need to redirect to HTTPS
acl need_ssl hdr(Host) -i blah
acl need_ssl hdr(Host) -i host1
acl need_ssl hdr(host) -i host2
acl need_ssl hdr(host) -i host3

redirect scheme https if need_ssl ! { ssl_fc }

# Define backends and redirect correct hostnames
use_backend mgmt if { hdr(Host) -i blah }
use_backend mgmt if { hdr(Host) -i somehost }
use_backend mgmt if { hdr(Host) -i anotherhost }

use_backend app1 if { hdr(Host) -i host1 }

use_backend app2 if { hdr(Host) -i host2 }
use_backend app3 if { hdr(Host) -i host3 }

http-request redirect location http://some.site if { hdr(Host)  
-i something }


###
# backend_mgmt
###
backend mgmt
fullconn 20

option http-server-close
option httpchk GET / HTTP/1.0

server mgmt-01 192.168.1.7:80 cookie mgmt-01 check inter 2000

###
# backend app1
###
backend app1
fullconn 5

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET /url HTTP/1.0

server app1 192.168.1.30:80 cookie app1 check inter 2000

###
# backend app2
###
backend app2
fullconn 512

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET / HTTP/1.0

server app2 192.168.1.46:443 cookie app2 ssl check inter 2000

###
# backend app3
###
backend app3
fullconn 512

no option http-server-close # ONLY USE IF NTLM IS NEEDED!
#   option http-keep-alive
option httpchk GET / HTTP/1.0

server app3 192.168.1.44:443 cookie app3 ssl check inter 2000






RE: http-keep-alive broken?

2014-01-02 Thread Sander Klein

On 31.12.2013 00:50, Lukas Tribus wrote:

Hi,

Subject: http-keep-alive broken?

Hi,

I'm using haproxy ss-20131229 to reverse proxy some Windows IIS servers
with ntlm-auth enabled (one of them being exchange 2012).

While I understood that using 'option http-keep-alive' would make
ntlm-auth work, it doesn't work for me. Are there still some issues
with http-keep-alive and ntlm-auth?


Honestly I would just use the default tunnel mode for this, so I don't
have to think about the NTLM crap when choosing 
keep-alive/load-balancing

parameters.

If you would like to combine NTLM-auth plus keep-alive, I'd propose 
enabling:

 option prefer-last-server

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20prefer-last-server
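A minimal sketch of that combination, with a made-up backend; the idea is that keep-alive keeps the authenticated connection open and prefer-last-server re-uses the same server for it whenever possible:

backend exchange
    option http-keep-alive
    option prefer-last-server
    option httpchk GET / HTTP/1.0
    server exch-01 192.0.2.10:443 ssl check inter 2000
    server exch-02 192.0.2.11:443 ssl check inter 2000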


While I do agree that using tcp mode would make stuff easier, I also need 
to do some redirecting based on the Host header, which is AFAIK not 
possible in tcp mode. (I might be wrong.)


I tried moving 'option http-keep-alive' to the frontend section but that 
didn't help. I also used 'option prefer-last-server' but that didn't 
help either, and I think it wouldn't make any difference since it only 
sends traffic to one server anyway.


The docs say that http-keep-alive should be useful if (quote):

  - when the server is non-HTTP compliant and authenticates the 
connection

instead of requests (eg: NTLM authentication)

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-keep-alive

But as far as I have tested, it only breaks NTLM auth badly. So, either 
I'm doing something wrong, or haproxy is doing something wrong, or the 
docs are wrong about the NTLM part :-)


Greets,

Sander



Re: UDP loadbalancing

2013-12-31 Thread Sander Klein

On , Willy Tarreau wrote:

On Tue, Dec 31, 2013 at 12:44:26AM +0100, Lukas Tribus wrote:

Hi,


 Hi,

 I know haproxy doesn't do UDP loadbalancing, but I figured someone here
 might know a nice tool which can do this for me. (If haproxy could do it
 it would have been nice though... ;-) )

 I've looked at pen but it doesn't seem to do IPv6.

 LVS can do the trick but I need to reconfigure a bit too much for my
 taste.

 So, are there any other UDP loadbalancers out there?


I suspect there aren't many, because load-balancing UDP via classic
userspace software is not very popular.

What application/service/protocol are you trying to load balance?

Any way can do this via ECMP?


In general I see LVS deployed for this. The reason is simple : 
UDP-based
services are generally not proxy-compatible because some IP addresses 
are
implied or transported in the protocol. Thus working in full 
transparent
mode is often the only way to go, and with LVS you can do that in DSR 
mode.

In fact, DNS might be one of the rare exceptions!



I actually do want to balance DNS. Well, actually I want to make it highly 
available, and since I already have 2 haproxy loadbalancers running I 
figured it would be easy enough to (mis)use them for that.
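Not haproxy, but for completeness, a rough sketch of the LVS/DSR approach mentioned above, with placeholder addresses (the real servers would also need the VIP on a loopback/dummy interface with ARP suppression, left out here):

ipvsadm -A -u 192.0.2.53:53 -s rr                   # virtual UDP service for DNS
ipvsadm -a -u 192.0.2.53:53 -r 192.0.2.101:53 -g    # real server 1, direct routing
ipvsadm -a -u 192.0.2.53:53 -r 192.0.2.102:53 -g    # real server 2, direct routing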


Greets,

Sander



UDP loadbalancing

2013-12-30 Thread Sander Klein

Hi,

I know haproxy doesn't do UDP loadbalancing, but I figured someone here 
might know a nice tool which can do this for me. (If haproxy could do it 
it would have been nice though... ;-) )


I've looked at pen but it doesn't seem to do IPv6.

LVS can do the trick but I need to reconfigure a bit too much for my 
taste.


So, are there any other UDP loadbalancers out there?

Regards,

Sander



http-keep-alive broken?

2013-12-30 Thread Sander Klein

Hi,

I'm using haproxy ss-20131229 to reverse proxy some Windows IIS servers 
with ntlm-auth enabled (one of them being exchange 2012).


While I understood that using 'option http-keep-alive' would make 
ntlm-auth work, it doesn't work for me. Are there still some issues with 
http-keep-alive and ntlm-auth?


My config is like:

frontend default-fe
bind x.x.x.x:80
bind 2001:::::1:80
bind x.x.x.x:443 ssl crt /etc/haproxy/ssl/blah.pem crt ciphers 
RC4:HIGH:!aNULL:!MD5
bind 2001:::::1:443 ssl crt 
/etc/haproxy/ssl/blah.pem crt ciphers RC4:HIGH:!aNULL:!MD5


maxconn 512

option httplog
option forwardfor
option splice-auto

# Add X-Forwarded-* headers
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Ssl on if { ssl_fc }
http-request set-header X-Forwarded-Proto http if ! { ssl_fc }
http-request set-header X-Forwarded-Ssl off if ! { ssl_fc }

# Define hosts which need to redirect to HTTPS
acl need_ssl hdr(Host) -i some.host.com

redirect scheme https if need_ssl ! { ssl_fc }

# Define backends and redirect correct hostnames
use_backend foo if { hdr(Host) -i another.host.com }
use_backend bar if { hdr(Host) -i some.host.com }


###
# backend foo
###
backend foo
fullconn 30
option http-keep-alive
option httpchk GET / HTTP/1.0

server foo x.x.x.x:443 cookie foo check inter 2000

###
# backend bar
###
backend bar
fullconn 30

option http-keep-alive
option httpchk GET / HTTP/1.0

server bar y.y.y.y:443 cookie bar ssl check inter 2000

Greets,

Sander



haproxy dev21 high cpu usage

2013-12-17 Thread Sander Klein

Hi,

I've enabled http-keep-alive in my config and now haproxy continuously 
peaks at 100% CPU usage, whereas without http-keep-alive it only uses 
10-13% CPU.


Is this normal/expected behavior?

Greets,

Sander




Re: haproxy dev21 high cpu usage

2013-12-17 Thread Sander Klein

On , Willy Tarreau wrote:

On Tue, Dec 17, 2013 at 10:44:12AM +0100, Guillaume Castagnino wrote:

On Tuesday, 17 December 2013 at 10:32:30, Sander Klein wrote:
 Hi,

 I've enabled http-keep-alive in my config and now haproxy continuously
 peaks at 100% CPU usage where without http-keep-alive it only uses
 10-13% CPU.

 Is this normal/expected behavior?

Hi,

Indeed, I can confirm this behaviour when enabling server-side
keepalive.


So it looks like the simple idle connection manager I did yesterday
is still not perfect :-/
I tried to trigger this case but could not manage to make it fail,
so I considered that was OK.

Any information to help reproduce it is welcome, of course!


Well, if you still have my config you can replace all http-server-close 
stuff with http-keep-alive and remove the httpclose options which were 
accidentally in there. It happens as soon as I start haproxy so I don't 
know what triggers it.


Greets,

Sander


