Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-23 Thread Janusz Dziemidowicz
Wed, 23 Jan 2019 at 11:53, Janusz Dziemidowicz wrote:
> 1.14.2 is current version in Debian testing. Debian seems reluctant to
> use "mainline" nginx versions (1.15.x) so 1.14.x might end in Debian
> 10. I'll try to file Debian bug report later today.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=920297

-- 
Janusz Dziemidowicz



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-23 Thread Janusz Dziemidowicz
Wed, 23 Jan 2019 at 10:41, Lukas Tribus wrote:
> > I tested all my servers and I've noticed that nginx is broken too. I
> > am running nginx 1.14.2 with OpenSSL 1.1.1a The nginx source contains
> > exactly the same function as haproxy:
> > https://trac.nginx.org/nginx/browser/nginx/src/event/ngx_event_openssl.c?rev=ebf8c9686b8ce7428f975d8a567935ea3722da70#L850
> >
> > However, it seems that it might have been fixed in 1.15.2 by this commit:
> > https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c
>
> Thanks for this. It's actually nginx 1.15.4 (September 2018) where
> this commit is present.

Yes, typed too fast ;)

> Are nginx folks aware of the problem? It would probably be wise for
> them to backport the fix to their 1.14 tree ...

1.14.2 is current version in Debian testing. Debian seems reluctant to
use "mainline" nginx versions (1.15.x) so 1.14.x might end in Debian
10. I'll try to file Debian bug report later today.

-- 
Janusz Dziemidowicz



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Janusz Dziemidowicz
Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic wrote:
>
> Hi.
>
> I have now build haproxy with boringssl and it looks quite good.
>
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?

openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
Just type 'K' and press enter. If the server is broken, the connection
will be aborted.

www.github.com:443, currently broken:
read R BLOCK
K
KEYUPDATE
read R BLOCK
read:errno=0

mail.google.com:443, working:
read R BLOCK
K
KEYUPDATE
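
If you prefer to trigger it from code instead of typing 'K', here is a
minimal client-side sketch with OpenSSL >= 1.1.1 (my own illustration,
not taken from haproxy or s_client; "ssl" is assumed to be an already
connected SSL object from your own client code):

#include <openssl/ssl.h>

static int request_key_update(SSL *ssl)
{
    /* queue a KeyUpdate message asking the peer to update its keys too */
    if (SSL_key_update(ssl, SSL_KEY_UPDATE_REQUESTED) != 1)
        return -1;
    /* the KeyUpdate is actually transmitted with the next write */
    return SSL_write(ssl, "GET / HTTP/1.0\r\n\r\n", 18) > 0 ? 0 : -1;
}

A broken server aborts the connection right after this, exactly as in
the s_client test above.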


-- 
Janusz Dziemidowicz



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Janusz Dziemidowicz
Mon, 21 Jan 2019 at 00:10, Adam Langley wrote:
> No idea, I'm afraid. If you have a server to test, it looks like one
> can use OpenSSL 1.1.1's `openssl s_client` tool to send a KeyUpdate
> message by writing "K" on a line by itself.

I tested all my servers and I've noticed that nginx is broken too. I
am running nginx 1.14.2 with OpenSSL 1.1.1a. The nginx source contains
exactly the same function as haproxy:
https://trac.nginx.org/nginx/browser/nginx/src/event/ngx_event_openssl.c?rev=ebf8c9686b8ce7428f975d8a567935ea3722da70#L850

However, it seems that it might have been fixed in 1.15.2 by this commit:
https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c

It might also be a better approach for haproxy to just use
SSL_OP_NO_RENEGOTIATION if possible. Older OpenSSL versions do not have
it, but they also don't support TLS 1.3.
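
To illustrate (a rough sketch only, not a tested haproxy patch; the
option is guarded because older OpenSSL releases do not define it):

#include <openssl/ssl.h>

static void forbid_renegotiation(SSL_CTX *ctx)
{
#ifdef SSL_OP_NO_RENEGOTIATION
    /* let OpenSSL itself reject client-initiated renegotiation, so a
     * TLS 1.3 KeyUpdate is no longer mistaken for a renegotiation by a
     * handshake-counting info callback */
    SSL_CTX_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
#else
    /* no such option here, but also no TLS 1.3, so the existing
     * info-callback approach keeps working */
    (void)ctx;
#endif
}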

And just for reference, I've found the Chrome bug for this problem (as I
am interested in when this will get enabled, to keep all my systems
updated): https://bugs.chromium.org/p/chromium/issues/detail?id=923685

-- 
Janusz Dziemidowicz



Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-08 Thread Janusz Dziemidowicz
Fri, 4 Jan 2019 at 11:59, Olivier Houchard wrote:
> I understand the concern.
> I checked and both nghttp2 and nginx disable the replay protection. The idea
> is you're supposed to allow early data only on harmless requests anyway, ie
> ones that could be replayed with no consequence.

Sorry for the late reply, I was pondering the problem ;) I'm pretty ok
with this patch, especially since others seem to do the same. And my
use case is DNS-over-TLS, which has no problems with replays anyway ;)

However, I believe that in general this is a bit more complicated. RFC
8446 describes this in detail in section 8:
https://tools.ietf.org/html/rfc8446#section-8
My understanding is that the RFC highly recommends anti-replay with
0-RTT. It seems that s_server implements single-use tickets, which is
exactly what section 8.1 describes. The above patch disables anti-replay
completely in haproxy, which might warrant some updates to the
documentation about the allow-0rtt option?
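
For reference, a minimal sketch of the OpenSSL 1.1.1 knobs involved (my
own illustration of the API, not the haproxy patch itself; the 16384
byte limit is arbitrary):

#include <openssl/ssl.h>

static void configure_0rtt(SSL_CTX *ctx, int disable_anti_replay)
{
    /* advertise willingness to read up to 16k of early data */
    SSL_CTX_set_max_early_data(ctx, 16384);

    if (disable_anti_replay)
        /* drop OpenSSL's built-in single-use protection (the RFC 8446
         * section 8.1 style mechanism); replay safety then becomes the
         * application's job, e.g. only allowing idempotent requests as
         * early data */
        SSL_CTX_set_options(ctx, SSL_OP_NO_ANTI_REPLAY);
}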

-- 
Janusz Dziemidowicz



Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-04 Thread Janusz Dziemidowicz
Thu, 3 Jan 2019 at 17:52, Olivier Houchard wrote:
> Ah I think I figured it out.
> OpenSSL added anti-replay protection when using early data, and it messes up
> with the session handling.
> With the updated attached patch, I get early data to work again. Is it better
> for you ?

Now it works.
However, I am a bit concerned about disabling something that sounds
like an important safeguard.
Reading this 
https://www.openssl.org/docs/man1.1.1/man3/SSL_SESSION_get_max_early_data.html#REPLAY-PROTECTION
suggests that it is really not a wise thing to do.

And again, s_server works differently. It does not use
SSL_OP_NO_ANTI_REPLAY, yet resumption with early data works, but only
once. Then you get a new session that you can resume again if you wish,
but again only once. You cannot resume the same session twice. With your
patch I can resume a single session as many times as I wish. Coupled
with early data, this is exactly what the TLS 1.3 RFC warns against.
This is probably due to haproxy using external session management.

I'll try to dig more into this over the weekend, now that I know where to look.

-- 
Janusz Dziemidowicz



Re: State of 0-RTT TLS resumption with OpenSSL

2019-01-03 Thread Janusz Dziemidowicz
Wed, 2 Jan 2019 at 19:04, Olivier Houchard wrote:
> You're right indeed. 0RTT was added with a development version of OpenSSL 
> 1.1.1,
> which had a default value for max early data of 16384, but it was changed to
> 0 in the meanwhile.
> Does the attached patch work for you ?

This indeed results in the following when using s_client:
Max Early Data: 16385

However, I believe it still does not work. I was trying again to test
it with s_client.

Without the allow-0rtt option I can resume a TLS 1.3 session without problems:
openssl s_client -connect host:port -sess_out sessfile
openssl s_client -connect host:port -sess_in sessfile
This results in:
Reused, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256

As soon as I add allow-0rtt (with your patch applied), the s_client
commands above always result in a new session:
New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256
No matter what I do, I am not able to resume any session with allow-0rtt active.

Just to rule out that I am using s_client in the wrong way, I've run the
same test against s_server. I was able to successfully resume the session
and even send early data that was accepted. So I believe that there is
still something wrong in haproxy with TLS session handling.

-- 
Janusz Dziemidowicz



State of 0-RTT TLS resumption with OpenSSL

2018-12-30 Thread Janusz Dziemidowicz
Hi,
I've been trying to get 0-RTT resumption working with haproxy 1.8.16
and OpenSSL 1.1.1a.
No matter what I put in the configuration file, testing with openssl
s_client always results in:
Max Early Data: 0

OK, let's look at ssl_sock.c
The only thing that seems to try to enable 0-RTT is this:
#ifdef OPENSSL_IS_BORINGSSL
if (allow_early)
SSL_set_early_data_enabled(ssl, 1);
#else
if (!allow_early)
SSL_set_max_early_data(ssl, 0);
#endif

But I fail to see how this is supposed to work. OpenSSL has 0-RTT
disabled by default. To enable it, one must call SSL_set_max_early_data
with the number of bytes the server is willing to read. The above simply
does... nothing.
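
For comparison, this is roughly what I would expect instead (a sketch
only; the 16384 byte limit is just an illustrative value, haproxy would
presumably want it configurable):

#ifdef OPENSSL_IS_BORINGSSL
	if (allow_early)
		SSL_set_early_data_enabled(ssl, 1);
#else
	if (allow_early)
		/* OpenSSL defaults to 0, so early data has to be enabled
		 * explicitly with the number of bytes we accept */
		SSL_set_max_early_data(ssl, 16384);
	else
		SSL_set_max_early_data(ssl, 0);
#endif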

Is it supposed to work at all, or am I missing something? ;)

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-08-02 Thread Janusz Dziemidowicz
Fri, 27 Jul 2018 at 10:35, Willy Tarreau wrote:
>
> On Fri, Jul 27, 2018 at 10:28:36AM +0200, Milan Petruzelka wrote:
> > after 2 days I also have no blocked connections. There's no need to wait
> > until Monday as I suggested yesterday.
>
> Perfect, many thanks Milan.

Sorry for being late, but 1.8.13 fixes the CLOSE_WAIT problem for me too :)
Now I have to dig into protocol errors I get when enabling h2, but
this will probably happen next week. I will create a new thread for
this.

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-20 Thread Janusz Dziemidowicz
Thu, 19 Jul 2018 at 11:14, Willy Tarreau wrote:
>
> Hi Milan, Janusz,
>
> I suspect I managed to reliably trigger the issue you were facing and
> found a good explanation for it. It is caused by unprocessed bytes at
> the end of the H1 stream. I manage to reproduce it if I chain two layers
> of haproxy with the last one sending stats. It does not happen with a
> single layer, so the scheduling matters a lot (I think it's important
> that the final CRLF is present in the same response packet as the final
> chunk).
>
> Could you please try the attached patch to see if you're on the same
> issue ?

I've been running 1.8.12 with this patch for an hour. It seems that it
helped somewhat, but not entirely. After an hour I still see about 10
CLOSE_WAIT sockets. The number seems to grow a lot slower, but still
grows (and some of them have been sitting in CLOSE_WAIT for over 30
minutes).
Since I'm also affected by the SPDY_PROTOCOL_ERROR I mentioned earlier,
I must disable h2 now.

-- 
Janusz Dziemidowicz



Re: SSL: double free on reload

2018-07-15 Thread Janusz Dziemidowicz
Mon, 16 Jul 2018 at 08:02, Willy Tarreau wrote:
> This one looks a bit strange. I looked at it a little bit and it corresponds
> to the line "free(bind_conf->keys_ref->tlskeys);". Unfortunately, there is no
> other line in the code appearing to perfom a free on this element, and when
> passing through this code the key_ref is destroyed and properly nulled. I
> checked if it was possible for this element not to be allocated and I don't
> see how that could happen either. Thus I'm seeing only three possibilities :
>
>   - this element was duplicated and appears at multiple places (multiple list
> elements) leading to a real double free
>
>   - there is a memory corruption somewhere possibly resulting in this element
> being corrupted and not in fact victim of a double free
>
>   - I can't read code and there is another free that I failed to detect.
>
> Are you able to trigger this on a trivial config ? Maybe it only happens
> when certain features you have in your config are enabled ?

I've reported this some time ago :)
https://www.mail-archive.com/haproxy@formilux.org/msg30093.html

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-15 Thread Janusz Dziemidowicz
2018-06-15 11:21 GMT+02:00 Willy Tarreau :
>> I've tried with all three patches, still no luck. I had to revert
>> native h2 shortly because I've started getting ERR_SPDY_PROTOCOL_ERROR
>> in Chrome. The error was always on POST request.
>
> Too bad, have to dig again then :-/.
> Thank you Janusz!

That's a bit weird, but I've reverted back to clean 1.8.9 and I still
get ERR_SPDY_PROTOCOL_ERROR, so this seems like an unrelated problem.
Chrome net-internals shows only:

t= 10312 [st=  4042]  HTTP2_SESSION_RECV_RST_STREAM
  --> error_code = "5 (STREAM_CLOSED)"
  --> stream_id = 129

However, I'm pretty sure I was doing exactly the same thing yesterday and had
no such problem.

Anyway, I'm reverting back to clean 1.8.9 and h2 handled by nghttpx.
I'd prefer not to do any more tests before Monday ;)

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-15 Thread Janusz Dziemidowicz
2018-06-14 19:49 GMT+02:00 Willy Tarreau :
> On Thu, Jun 14, 2018 at 07:22:34PM +0200, Janusz Dziemidowicz wrote:
>> 2018-06-14 18:56 GMT+02:00 Willy Tarreau :
>>
>> > If you'd like to run a test, I'm attaching the patch.
>>
>> Sure, but you forgot to attach it :)
>
> Ah, that's because I'm stupid :-)
>
> Here it comes this time.

I've tried with all three patches, still no luck. I had to revert
native h2 shortly because I've started getting ERR_SPDY_PROTOCOL_ERROR
in Chrome. The error was always on POST request.

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-14 Thread Janusz Dziemidowicz
2018-06-14 18:56 GMT+02:00 Willy Tarreau :

> If you'd like to run a test, I'm attaching the patch.

Sure, but you forgot to attach it :)

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-14 Thread Janusz Dziemidowicz
2018-06-14 11:46 GMT+02:00 Willy Tarreau :
>> Will try.

I've tried the second patch, together with the first one; no change at all.

However, I was able to catch it on my laptop finally. I still can't
easily reproduce this, but at least that's something. A little
background: my company makes online games; the one I am testing with
is a browser Flash game. As it starts, it makes various API calls
and loads game resources (graphics, music, etc.). So I've disabled the
browser cache and tried closing the browser tab with the game as it was
loading. After a couple of tries I reached the following state:
tcp6    1190    0   SERVER_IP:443   MY_IP:54514   ESTABLISHED 538049/haproxy

This is with the browser tab already closed. The browser (latest Chrome)
probably keeps the connection alive, but haproxy should close it after
a while. Well, that didn't happen; after a good 30 minutes the
connection was still ESTABLISHED. My timeouts are at the beginning of
this thread; my understanding is that this connection should be killed
after "timeout client", which is 60s.
After that I closed the browser completely. The connection moved to the
CLOSE_WAIT state in question:
tcp6    1191    0   SERVER_IP:443   MY_IP:54514   CLOSE_WAIT  538049/haproxy

haproxy logs (I have dontlognormal enabled): https://pastebin.com/sUsa6jNQ

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-14 Thread Janusz Dziemidowicz
2018-06-14 11:14 GMT+02:00 Willy Tarreau :
> Yep it's really not easy and probably once we find it I'll be ashamed
> saying "I thought this code was not merged"... By the way yesterday I
> found another suspect but I'm undecided on it ; the current architecture
> of the H2 mux complicates the code analysis. If you want to give it a
> try on top of previous one, I'd appreciate it, even if it doesn't change
> anything. Please find it attached.

Will try.

I've found one more clue. I've added various graphs to my monitoring.
Also, I've been segregating various traffic kinds into different
haproxy backends. Yesterday's test shows this:
https://pasteboard.co/HpPK2Ml6.png

This backend (sns) is used exclusively for static files that are
"large" (from 10KB up to over a megabyte) compared to my usual traffic
(various API calls mostly). Those 5xx errors are not from the backend
servers, "show stat":
sns,kr-8,0,0,5,108,,186655,191829829,19744924356,,0,,0,0,0,0,UP,100,1,0,0,0,15377,0,,1,4,1,,186655,,2,7,,147,L7OK,200,1,0,184295,550,0,0,0,8895,0,0,OK,,0,12,4,1474826Layer7
check passed,,2,5,610.7.1.8:81,,http
sns,kr-10,0,0,2,105,,186654,191649821,19977086644,,0,,0,0,0,0,UP,100,1,0,0,0,15377,0,,1,4,2,,186654,,2,7,,148,L7OK,200,0,0,184275,551,0,0,0,8823,0,0,OK,,0,21,4,1473385Layer7
check passed,,2,5,610.7.1.10:81,,http
sns,BACKEND,0,0,8,213,6554,383553,391967657,39722011000,0,0,,0,0,0,0,UP,200,2,0,,0,15377,0,,1,4,0,,373309,,1,14,,3320,368563,1101,0,1873,12008383545,27962,0,0,0,0,0,0,,,0,18,5,1763433,,http,roundrobin,,,

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-13 Thread Janusz Dziemidowicz
2018-06-13 19:14 GMT+02:00 Willy Tarreau :
> On Wed, Jun 13, 2018 at 07:06:58PM +0200, Janusz Dziemidowicz wrote:
>> 2018-06-13 14:42 GMT+02:00 Willy Tarreau :
>> > Hi Milan, hi Janusz,
>> >
>> > thanks to your respective traces, I may have come up with a possible
>> > scenario explaining the CLOSE_WAIT you're facing. Could you please
>> > try the attached patch ?
>>
>> Unfortunately there is no change for me. CLOSE_WAIT sockets still
>> accumulate if I switch native h2 on. Milan should probably double
>> check this though.
>> https://pasteboard.co/HpJj72H.png
>
> :-(
>
> With still the same perfectly straight line really making me think of either
> a periodic activity which I'm unable to guess nor model, or something related
> to our timeouts.

It is not exactly straight. While it looks like this for a short test,
when I did this earlier, for a much longer period of time, it was
slowing down during the night, when I have less traffic.

>> I'll try to move some low-traffic site to a separate instance tomorrow,
>> maybe I'll be able to capture some traffic too.
>
> Unfortunately with H2 that will not help much, there's the TLS layer
> under it that makes it a real pain. TLS is designed to avoid observability
> and it does it well :-/
>
> I've suspected a received shutdown at the TLS layer, which I was not
> able to model at all. Tools are missing at this point. I even tried
> to pass the traffic through haproxy in TCP mode to help but I couldn't
> reproduce the problem.

When I disable native h2 in haproxy I switch back to tcp mode going
through nghttpx. The traffic is obviously the same, yet there is no
problem.

> It could possibly help if you can look for the affected client's IP:port
> in your logs to see if they are perfectly normal or if you notice they
> have something in common (eg: always the exact same requests, or they
> never made a request from the affected connections, etc).

I'm aware of the problems :) However, if I can get some traffic dumps,
knowing my application, I might be able to reproduce this, which would
be a huge win. I've already tried some experiments with various tools,
with no luck unfortunately.

> I won't merge the current patch for now. At minima it's incomplete,
> and there is always a risk that it breaks something else in such a
> difficult to detect way.

Sure, no problem :)

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-06-13 Thread Janusz Dziemidowicz
2018-06-13 14:42 GMT+02:00 Willy Tarreau :
> Hi Milan, hi Janusz,
>
> thanks to your respective traces, I may have come up with a possible
> scenario explaining the CLOSE_WAIT you're facing. Could you please
> try the attached patch ?

Unfortunately there is no change for me. CLOSE_WAIT sockets still
accumulate if I switch native h2 on. Milan should probably double
check this though.
https://pasteboard.co/HpJj72H.png

I'll try to move some low-traffic site to a separate instance tomorrow;
maybe I'll be able to capture some traffic too.

-- 
Janusz Dziemidowicz



Re: Connections stuck in CLOSE_WAIT state with h2

2018-05-24 Thread Janusz Dziemidowicz
2018-05-24 22:26 GMT+02:00 Willy Tarreau :
>> This kinda seems like the socket was closed on the writing side, but
>> the client has already sent something and everything is stuck. I was
>> not able to reproduce the problem by myself. Any ideas how to debug
>> this further?
>
> For now not much comes to my mind. I'd be interested in seeing the
> output of "show fd" issued on the stats socket of such a process (it
> can be large, be careful).

Will do tomorrow. Forgot to mention: apart from this issue, everything
seems to work fine. No user reports any problems. Obviously it consumes
more and more memory, so I can only enable h2 for an hour or two to
avoid problems.

>> haproxy -vv (Debian package rebuilt on stretch with USE_TFO):
>
> Interesting, and I'm seeing "tfo" on your bind line. We don't have it
> on haproxy.org. Could you please re-test without it, just in case ?
> Maybe you're receiving SYN+data+FIN that are not properly handled.

I've spent some time tweaking several settings already. I believe I've
checked without tfo and there was no difference. Will repeat that
tomorrow to be sure.

>> HA-Proxy version 1.8.9-1~tsg9+1 2018/05/21
>
> Is 1.8.9 the first version you tested or is it the first one you saw
> the issue on, or did you notice the issue on another 1.8 version ? If
> it turned out to be a regression it could be easier to spot in fact.
>
> Your config is very clean and shows nothing suspicious at all. Thus at
> first knowing if tfo changes anything would be a good start.

I've also seen this issue in 1.8.8, which was the first version I
used after 1.7.x. My actual config is a bit more complicated (multiple
processes per socket, some stats, etc.), but I've been stripping it
down and down, and what I've attached still produces this issue for
me.

Anyway, I'll do another round of experiments (without tfo) tomorrow.

-- 
Janusz Dziemidowicz



Connections stuck in CLOSE_WAIT state with h2

2018-05-24 Thread Janusz Dziemidowicz
quest 30s

  log global
  retries 3
  backlog 16384
  maxconn 65536

  mode http
  errorfile 403 /etc/haproxy/403.html

backend php
  option httpchk GET /ping.php
  balance roundrobin
  cookie SOME_COOKIE insert indirect httponly
  server hostname-1  IP:81  check inter 5000 rise 2 fall 5 weight 100  cookie 1
# more servers

frontend http
  bind IP:80 transparent
  bind IP:443 tfo transparent ssl alpn h2,http/1.1 curves X25519:P-256
tls-ticket-keys FILE crt FILE

  http-request set-header X-Forwarded-For %ci unless LOCALHOST
  http-request set-header X-Forwarded-Proto https unless { dst_port 80 }
  http-request set-header X-Forwarded-Proto http if { dst_port 80 }

  default_backend php

-- 
Janusz Dziemidowicz



Process crash on reload with TLS tickets

2018-05-23 Thread Janusz Dziemidowicz
Hi,
this seems harmless, but haproxy processes crash on reload when using
TLS tickets with multiple sockets per port.

Following configuration reproduces the problem:
global
  nbproc 2
  user haproxy
  group haproxy
  daemon

defaults
  timeout connect 5000
  timeout client  5
  timeout server  5

frontend test
  bind 127.0.0.1:8443 process 1 ssl crt file tls-ticket-keys file
  bind 127.0.0.1:8443 process 2 ssl crt file tls-ticket-keys file

Every reload results in the following warning in the logs:
[WARNING] 142/134019 (23389) : Former worker 23404 exited with code 134

gdb shows the following:
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7fc87c8cc42a in __GI_abort () at abort.c:89
#2  0x7fc87c908c00 in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7fc87c9fdd98 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/posix/libc_fatal.c:175
#3  0x7fc87c90efc6 in malloc_printerr (action=3,
str=0x7fc87c9fde10 "double free or corruption (!prev)", ptr=, ar_ptr=) at malloc.c:5049
#4  0x7fc87c90f80e in _int_free (av=0x7fc87cc31b00 ,
p=0x55f389a1a4a0, have_lock=0) at malloc.c:3905
#5  0x55f387b9aa16 in ssl_sock_destroy_bind_conf
(bind_conf=0x55f389a1c400) at src/ssl_sock.c:4818
#6  0x55f387c25280 in deinit () at src/haproxy.c:2240
#7  0x55f387b8846e in main (argc=,
argv=0x7ffed2261cc8) at src/haproxy.c:3070

src/ssl_sock.c:4818 contains:
  free(bind_conf->keys_ref->tlskeys);
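
For illustration, the kind of double free the backtrace points at (a
simplified sketch only, not actual haproxy code): if two bind_confs
ever end up sharing one keys_ref, e.g. because both bind lines name the
same tls-ticket-keys file, then per-bind cleanup like the line above
frees it twice:

#include <stdlib.h>

struct fake_keys_ref  { void *tlskeys; };
struct fake_bind_conf { struct fake_keys_ref *keys_ref; };

static void fake_destroy_bind_conf(struct fake_bind_conf *bc)
{
    free(bc->keys_ref->tlskeys);  /* second bind_conf frees freed memory */
    free(bc->keys_ref);
    bc->keys_ref = NULL;          /* clears only this bind_conf's pointer */
}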

haproxy version (1.8.9 recompiled on Debian stretch with USE_TFO):
$ /usr/sbin/haproxy -vv
HA-Proxy version 1.8.9-1~tsg9+1 2018/05/21
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fdebug-prefix-map=/root/haproxy-1.8.9=.
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f  25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0f  25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace


-- 
Janusz Dziemidowicz



Re: [PATCH] Clear OpenSSL error stack after trying to parse OCSP file

2017-03-10 Thread Janusz Dziemidowicz
2017-03-08 17:39 GMT+01:00 Olivier Doucet :
> With your patch, I can see that you addressed an issue I had: I cannot
> send an ocsp refresh to a certificate that does not hold an ocsp
> signature. It
> seems you succeed in that by providing at least an empty file. That's a
> start.
> Can it be possible to modify current source code, to not provide any ocsp
> file on startup but still accept OCSP refresh through haproxy socket ?

Probably. I've also been annoyed with this behavior for some time (empty
.ocsp files do work, but generate a lot of warnings for me). Maybe I'll
find some time to look into this later.
Regardless, this patch is rather safe and should probably be applied
anyway, if there are no concerns about it (and probably backported to 1.7).

-- 
Janusz Dziemidowicz


[PATCH] Clear OpenSSL error stack after trying to parse OCSP file

2017-03-08 Thread Janusz Dziemidowicz
An invalid OCSP file (for example an empty one, which can be used to
allow the OCSP response to be set dynamically later) causes errors that
are placed on the OpenSSL error stack. Those errors are not cleared, so
anything that checks this stack later will fail.

Following configuration:
  bind :443 ssl crt crt1.pem crt crt2.pem

With following files:
  crt1.pem
  crt1.pem.ocsp - empty one
  crt2.pem.rsa
  crt2.pem.ecdsa

Will fail to load.
---
 src/ssl_sock.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 91a15af7..f947c996 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -478,6 +478,8 @@ static int ssl_sock_load_ocsp_response(struct chunk *ocsp_response, struct certi
 
ret = 0;
 out:
+   ERR_clear_error();
+
if (bs)
 OCSP_BASICRESP_free(bs);
 
-- 
2.11.0




Re: SCT TLS extensions with 2 certificates

2017-01-09 Thread Janusz Dziemidowicz
2017-01-09 14:01 GMT+01:00 Pier Carlo Chiodi :
> I'm having an issue while trying to serve SCT TLS extensions in a 2
> certificates scenario.

This might be a problem with OpenSSL 1.1.0 and SNI. There is a very
similar issue reported for the nginx CT module:
https://github.com/grahamedgecombe/nginx-ct/issues/13
And the OpenSSL bug report: https://github.com/openssl/openssl/issues/2180

-- 
Janusz Dziemidowicz



Re: Problem with http-request set-src and send-proxy on 1.6

2016-11-18 Thread Janusz Dziemidowicz
2016-11-18 14:27 GMT+01:00 Janusz Dziemidowicz :
> listen default
>   bind :
>   http-request set-src req.hdr_ip(X-Forwarded-For)
>   server localhost 127.0.0.1:80 send-proxy

Sorry, there are obviously two binds there:
  bind :
  bind :::

-- 
Janusz Dziemidowicz



Problem with http-request set-src and send-proxy on 1.6

2016-11-18 Thread Janusz Dziemidowicz
Hello,
I think I've found a problem with how http-request set-src interacts
with the PROXY protocol on backend servers.

Very simple setup:
listen default
  bind :
  http-request set-src req.hdr_ip(X-Forwarded-For)
  server localhost 127.0.0.1:80 send-proxy

wget -4 --header='X-Forwarded-For: 192.0.2.1' -O /dev/null -S
http://localhost:
gives
PROXY TCP4 192.0.2.1 127.0.0.1 0 

wget -6 --header='X-Forwarded-For: 2001:db8::1' -O /dev/null -S
http://localhost:
gives
PROXY TCP6 2001:db8::1 ::1 0 

but both:
wget -4 --header='X-Forwarded-For: 2001:db8::1' -O /dev/null -S
http://localhost:
wget -6 --header='X-Forwarded-For: 192.0.2.1' -O /dev/null -S
http://localhost:
give
PROXY UNKNOWN

Log files report correct IPs in all cases. The same problem exists for
a frontend listening on UNIX sockets, except in that case I always get
PROXY UNKNOWN.

haproxy -vv:
HA-Proxy version 1.6.9-2 2016/09/28
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2
-fdebug-prefix-map=/build/haproxy-XsW4aZ/haproxy-1.6.9=. -fPIE
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Built with network namespace support

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-19 Thread Janusz Dziemidowicz
2016-04-19 18:13 GMT+02:00 Emeric Brun :
> I don't know how the curve negotiation works, but i have some questions.
>
> What is the behavior if the SSL_CTX_set_ecdh_auto is used on server side and 
> if
> the client doesn't support the neg.
>
> In other words:
>
> Is it useful to set both SSL_CTX_set_ecdh_auto and SSL_CTX_set_tmp_ecdh (with 
> the first one of the list for instance), to ensure
> the first wanted curve is used if client doesn't support the neg.

Not really. In the TLS protocol, there is only one way for a client to
select an elliptic curve, and that is the "supported elliptic curves"
extension. The confusing part is the OpenSSL API. The "old" API, aka
SSL_CTX_set_tmp_ecdh(), allowed only one curve to be selected by the
server. If it was not present in the extension sent by the client, then
bummer, connection error. The new API, SSL_CTX_set_ecdh_auto(), supports
real negotiation, as was always intended in the design of TLS. The
client sends its curve list in the extension, and the server tries to
find a matching curve from the list it supports.

There are no clients "not supporting the negotiation". If the client
supports elliptic curves at all, it must send the list in the extension.

-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-18 Thread Janusz Dziemidowicz
2016-04-15 16:50 GMT+02:00 David Martin :
> I have tested the current patch with the HAProxy default, a list of curves,
> a single curve and also an incorrect curve.  All seem to behave correctly.
> The conditional should only skip calling ecdh_auto() if curves_list()
> returns 0 in which case HAProxy exits anyway.
>
> Maybe I'm missing something obvious, this has been a learning experience for
> me.

You are correct. I guess I shouldn't have been looking at patches
during a break at my day job ;)
Seems ok to me now, apart from the missing documentation changes ;)

-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-15 Thread Janusz Dziemidowicz
2016-04-15 11:16 GMT+02:00 Pavlos Parissis :
> But on server side you need openssl 1.1.0[1] which is not ready yet and
> I think it requires changes on haproxy. Nginx has already some level of
> support[2] for openssl 1.1.0.

Sure, I didn't mean that it will work right now, but someday,
somewhere in the future ;)

-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-15 Thread Janusz Dziemidowicz
2016-04-14 17:39 GMT+02:00 David Martin :
> Here's a revised patch, it throws a fatal config error if
> SSL_CTX_set1_curves_list() fails.  The default echde option is used so
> current configurations should not be impacted.
>
> Sorry Janusz, forgot the list on my reply.

I believe that now it is wrong, as SSL_CTX_set_ecdh_auto works
differently than this code implies.
From what I was able to tell from the OpenSSL code (always a pleasure to
look at), it works as follows:
- SSL_CTX_set_ecdh_auto turns on negotiation of curves; without it no
curves are negotiated (and only the one configured curve is used, "the
old way")
- the list of curves considered during negotiation contains all of the
curves supported by OpenSSL
- unless you also call SSL_CTX_set1_curves_list() and narrow it down to
the list you prefer

Right now your patch calls either SSL_CTX_set_ecdh_auto or
SSL_CTX_set1_curves_list, but not both. Unless I'm mistaken, this is
not how it is supposed to be used.
Have you tested the behavior of the server with any command line client?

I believe this should be something like:
#if new OpenSSL
   SSL_CTX_set_ecdh_auto(... 1)
   SSL_CTX_set1_curves_list() with user-supplied ecdhe or ECDHE_DEFAULT_CURVE by default
#elif ...
   SSL_CTX_set_tmp_ecdh() with user-supplied ecdhe or ECDHE_DEFAULT_CURVE by default
#endif

This way haproxy behaves exactly the same with the default
configuration on any version of OpenSSL. The user can configure
multiple curves if there is a sufficiently new OpenSSL.

Changes to the documentation would also be nice in the patch :)

-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-15 Thread Janusz Dziemidowicz
2016-04-15 6:55 GMT+02:00 Willy Tarreau :
>> Switching ECDHE curves can have performance impact, for example result
>> of openssl speed on my laptop:
>>  256 bit ecdh (nistp256)   0.0003s   2935.3
>>  384 bit ecdh (nistp384)   0.0027s    364.9
>>  521 bit ecdh (nistp521)   0.0016s    623.2
>> The difference is so high for nistp256 because OpenSSL has heavily
>> optimized implementation
>> (https://www.imperialviolet.org/2010/12/04/ecc.html).
>
> Wow, and despite this you want to let the client force the server to
> switch to 384? Looks like a huge DoS to me.

Just to be sure, I'm not the original author, I've just made some comments ;)
Some people tend to use the strongest possible crypto, just for the
sake of it. Usually on low traffic sites :)
Anyway, Chrome 50 has just shipped support for x25519. I believe this
will also have a very fast implementation, so the ability to configure
more curves will probably be handy in the near future.

-- 
Janusz Dziemidowicz



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread Janusz Dziemidowicz
2016-04-14 12:05 GMT+02:00 Willy Tarreau :
> Hi David,
>
> On Wed, Apr 13, 2016 at 03:19:45PM -0500, David Martin wrote:
>> This is my first attempt at a patch, I'd love to get some feedback on this.
>>
>> Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2.
>
>> From 05bee3e95e5969294998fb9e2794ef65ce5a6c1f Mon Sep 17 00:00:00 2001
>> From: David Martin 
>> Date: Wed, 13 Apr 2016 15:09:35 -0500
>> Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection
>>
>> Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
>> allows the server to negotiate ECDH curves much like it does ciphers.
>> Prefered curves can be specified using the existing ecdhe bind options
>> (ecdhe secp384r1:prime256v1)
>
> Could it have a performance impact ? I mean, may this allow a client to
> force the server to use curves that imply harder computations for example ?
> I'm asking because some people got seriously hit by the move from dhparm
> 1024 to 2048, so if this can come with a performance impact we possibly want
> to let the user configure it.

Switching ECDHE curves can have a performance impact; for example, the
result of openssl speed on my laptop:
 256 bit ecdh (nistp256)   0.0003s   2935.3
 384 bit ecdh (nistp384)   0.0027s    364.9
 521 bit ecdh (nistp521)   0.0016s    623.2
The difference is so large for nistp256 because OpenSSL has a heavily
optimized implementation
(https://www.imperialviolet.org/2010/12/04/ecc.html).

Apart from calling SSL_CTX_set_ecdh_auto(), this patch also takes into
account the user-supplied curve list, so users can customize this as
needed (currently haproxy only allows selecting one curve, which is a
limitation of older OpenSSL versions).

However, this patch reuses the 'ecdhe' bind option. Currently it is
documented to accept only one curve. I believe it should at least be
updated to state that multiple curves can be used with a sufficiently
new OpenSSL.
Also, I'm not sure what will happen when SSL_CTX_set1_curves_list() is
called with NULL (no ecdhe bind option). Even if it is accepted by
OpenSSL, it will silently change the haproxy default: before this patch
it was only prime256v1 (as defined in ECDHE_DEFAULT_CURVE), afterwards
it will default to all curves supported by OpenSSL. Probably the best
would be to keep the current default, so everything works consistently
in the default configuration, regardless of the version of haproxy and
OpenSSL.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-12 Thread Janusz Dziemidowicz
2016-04-11 17:23 GMT+02:00 Willy Tarreau :
> Janusz and Nenad, please apply the following patch to your 1.5 tree.
> It works for me and does what the code is supposed to do (ie: subtract
> outgoing data from the reserve as done in channel_full() when deciding
> to re-enable polling).

Patch applied and one of the remaining servers was upgraded. So far it
seems to work ok for me too. Thank you very much.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-11 Thread Janusz Dziemidowicz
2016-04-09 2:15 GMT+02:00 Willy Tarreau :
> On Fri, Apr 08, 2016 at 03:15:22PM +0200, Janusz Dziemidowicz wrote:
>> 2016-04-07 17:47 GMT+02:00 Willy Tarreau :
>> > If someone who can reliably reproduce the issue could check whether 1.6 has
>> > the same issue, it would help me cut the problem in half. That obviously
>> > excludes all those running sensitive production of course.
>>
>> I can try to test 1.6 next week and see what happens.
>
> Wow that could be great if you could, thanks Janusz!

I've upgraded one of my servers to haproxy 1.6.4. So far no problems
and CPU usage seems ok.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-08 Thread Janusz Dziemidowicz
2016-04-07 17:47 GMT+02:00 Willy Tarreau :
> If someone who can reliably reproduce the issue could check whether 1.6 has
> the same issue, it would help me cut the problem in half. That obviously
> excludes all those running sensitive production of course.

I can try to test 1.6 next week and see what happens.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-04 Thread Janusz Dziemidowicz
2016-03-31 9:46 GMT+02:00 Janusz Dziemidowicz :
> About the CPU problem. Reverting 7610073a indeed fixes my problem. If
> anyone has any idea what is the problem with this commit I am willing
> to test patches:)
> Some more details about my setup. All servers have moderate traffic
> (200-500mbit/s in peak). I do both plain HTTP and HTTPS + some small
> traffic in TCP mode (also both with and without TLS). I also make an
> extensive use of unix sockets for HTTP/2 support (decrypted HTTP/2
> traffic is routed via unix socket to nghttpx and then arrives back on
> another socket as HTTP/1.1).

Back to the original problem, as the TLS ticket discussion has ended.
Does anyone have any idea why 7610073a seems to increase CPU usage? I've
tried looking into this, but unfortunately I am not that familiar with
haproxy internals.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-03-31 Thread Janusz Dziemidowicz
2016-03-31 12:21 GMT+02:00 Lukas Tribus :
> Pretty sure, I killed one process after another in between the tests.
>
> I also compiled with USE_PRIVATE_CACHE=1 to disable inter process
> session ID caching, and I can see that session caching definitely
> fails (which is expected if hitting different proccesses with private cache)
> while tls ticketing works fine:
>
> https://gist.github.com/lukastribus/b1815c392512b42167f7578e085a422f
>
>
> Nenad, can you confirm or clarify expected tls ticketing behavior
> in nbproc mode when openssl is generating the tls ticket key?

OK, I've launched vanilla haproxy 1.6.4 from Debian testing and I
believe I know what is going on.

If I configure a single listening socket, like this:
  bind :443 ssl alpn http/1.1 crt /etc/ssl/snakeoil.pem
everything works fine, including tickets.

However, if I configure multiple listening sockets, to take advantage
of SO_REUSEPORT (and that is exactly what I have on my production
haproxy 1.5):
  bind :443 process 1 ssl alpn http/1.1 crt /etc/ssl/snakeoil.pem
  bind :443 process 2 ssl alpn http/1.1 crt /etc/ssl/snakeoil.pem
  bind :443 process 3 ssl alpn http/1.1 crt /etc/ssl/snakeoil.pem
  bind :443 process 4 ssl alpn http/1.1 crt /etc/ssl/snakeoil.pem
Then tickets do not work properly. Session ID based resumption works
correctly in both cases, which might be a bit confusing for users.

Obviously, on 1.6 I can use tls-ticket-keys which makes tickets work
properly in all cases.

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-03-31 Thread Janusz Dziemidowicz
2016-03-30 21:22 GMT+02:00 Lukas Tribus :
> Hi Janusz,
>
>> So there is no difference. Session ID based resumption works ok,
>> ticket based resumption is kinda broken in both versions. Are tickets
>> supposed to work properly with nbproc>1?
>
> I just tested it here, ticket based resumption works fine for me with
> nbproxy>1 in both 1.5.16 and current 1.7 head.
>
> Since you are also seeing it in 1.5.15, that doesn't seem to be the
> cause of this problem, but its is something you will have to fix
> because the CPU impact of broken resumption is plenty.
>
>
> You can disable tls ticket for now, since you probably want to
> troubleshoot the first issue, as per Nenad's suggestion.

About the CPU problem: reverting 7610073a indeed fixes my problem. If
anyone has any idea what the problem with this commit is, I am willing
to test patches :)
Some more details about my setup. All servers have moderate traffic
(200-500 Mbit/s at peak). I do both plain HTTP and HTTPS, plus some
small amount of traffic in TCP mode (also both with and without TLS). I
also make extensive use of unix sockets for HTTP/2 support (decrypted
HTTP/2 traffic is routed via a unix socket to nghttpx and then arrives
back on another socket as HTTP/1.1).

I am well aware that broken resumption is a bad thing. However, I've
looked through the haproxy 1.5 code and I don't quite understand how
tickets are supposed to work with nbproc>1. The only code related to
TLS tickets in 1.5 is the code to disable them. Otherwise OpenSSL
defaults are used, which means OpenSSL will generate a random key to
encrypt/decrypt tickets. Unless I've missed something, it means that
each haproxy process will have different keys and tickets will not
work across different processes.
Are you sure that during your tests traffic hit at least two different
processes? If a single one accepted all the connections, then
resumption with tickets will work; it will break as soon as another
process accepts a resumption attempt.
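
For reference, the mechanism that makes tickets work across processes
(and that tls-ticket-keys in 1.6 builds on) is OpenSSL's ticket key
callback. A heavily simplified sketch, with a static shared key instead
of a key file and no rotation, purely to illustrate the idea:

#include <string.h>
#include <openssl/ssl.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

static const unsigned char tkt_name[16] = "demo-key-name";
static const unsigned char tkt_aes[16]  = "demo-aes-key";
static const unsigned char tkt_hmac[16] = "demo-hmac-key";

static int ticket_cb(SSL *ssl, unsigned char name[16], unsigned char *iv,
                     EVP_CIPHER_CTX *ectx, HMAC_CTX *hctx, int enc)
{
	(void)ssl;
	if (enc) { /* issuing a ticket */
		if (RAND_bytes(iv, EVP_MAX_IV_LENGTH) <= 0)
			return -1;
		memcpy(name, tkt_name, 16);
		EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, tkt_aes, iv);
		HMAC_Init_ex(hctx, tkt_hmac, sizeof(tkt_hmac), EVP_sha256(), NULL);
		return 1;
	}
	/* decrypting: any process loaded with the same key material succeeds */
	if (memcmp(name, tkt_name, 16) != 0)
		return 0; /* unknown key name -> fall back to a full handshake */
	EVP_DecryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, tkt_aes, iv);
	HMAC_Init_ex(hctx, tkt_hmac, sizeof(tkt_hmac), EVP_sha256(), NULL);
	return 1;
}

/* registered once per context: SSL_CTX_set_tlsext_ticket_key_cb(ctx, ticket_cb); */

Without such a shared key, each process keeps the random key OpenSSL
generated for it, which is exactly why resumption only works when the
same process handles both connections.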

-- 
Janusz Dziemidowicz



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-03-30 Thread Janusz Dziemidowicz
│ XXX.XXX.XXX.XX │   3 │ ECDHE-RSA-AES128-SHA │ ✘ │ AB76C9FA3BB151B455… │ 8E2FDA0F51FDC8504A… │ ✔ │ HTTP/1.1 404 Not Found
│ XXX.XXX.XXX.XX │   4 │ ECDHE-RSA-AES128-SHA │ ✔ │ AB76C9FA3BB151B455… │ 8E2FDA0F51FDC8504A… │ ✔ │ HTTP/1.1 404 Not Found
[✔] Dump results to file.

So there is no difference. Session ID based resumption works ok,
ticket based resumption is kinda broken in both versions. Are tickets
supposed to work properly with nbproc>1?

>> I have also Certificate Transparency patch applied, backported from
>> 1.6.
>
> Can you try without it?

Again, I will try tomorrow. As the author of this patch I'm fairly sure
that it is irrelevant, but I may well be biased. Will try first thing
tomorrow morning :)

-- 
Janusz Dziemidowicz


Re: General SSL vs. non-SSL Performance

2016-03-19 Thread Janusz Dziemidowicz
2016-03-17 20:48 GMT+01:00 Aleksandar Lazic :
> Hm, I'm not sure if I understand this right.
> I will try to repeat it just to check if I have understood it right.
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.1-tls-ticket-keys
>
> #
> frontend ssl
>   bind :443 ssl tls-ticket-keys /myramdisk/ticket-file <= this is a local
> file right
>   stick-table type binary len ?? 10m expire 12h store ??? if {
> req.ssl_st_ext 1 }
> ##
>
> could this pseudo conf snippet work?
> What I don't understand is HOW the tls ticket 'distributed to all HAproxy
> servers' with the current haproxy options.

If this local file is the same on two servers then those two servers
can both resume the same session. Session state is stored on the
client (encrypted by the contents of "this local file"). There is no
need to distribute anything apart from this local file. The downside is
that not all clients support this.

-- 
Janusz Dziemidowicz



Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-12-10 Thread Janusz Dziemidowicz
2015-12-10 21:14 GMT+01:00 Dave Zhu (yanbzhu) :
> Finished OCSP portion. It’s in patch 5
>
> OCSP staple files will have to be in the same format: haproxy.pem.rsa.ocsp
> and haproxy.pem.ecdsa.ocsp. They will get picked up when you load
> haproxy.pem in any of the supported methods.
>
> This patch is slightly bigger, as there was some refactoring that had to
> be done to support multi-cert SSL_CTX’s.
>
> The only remaining piece would be SCTL support, and I have no experience
> with that. Someone else will have to step in to add that functionality.

I haven't been following this thread closely, but SCTL should be very
similar to OCSP. SCTL stands for signed certificate timestamp list and
is just a simple list of signatures from Certificate Transparency
logs. It is just a binary blob tied to a given certificate. If the
client includes the CT extension, then the server should locate the
appropriate SCTL (haproxy.pem.rsa.sctl or haproxy.pem.ecdsa.sctl) and
include it in its initial reply. That's all.
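
For whoever steps in: a rough sketch of how serving an .sctl blob could
look with OpenSSL 1.0.2's custom extension API (illustrative only, not
the actual haproxy patch; extension type 18 is
signed_certificate_timestamp and the blob is served verbatim):

#include <stddef.h>
#include <openssl/ssl.h>

#define EXT_SCT 18

struct sctl_blob { const unsigned char *data; size_t len; };

static int sctl_add_cb(SSL *ssl, unsigned int ext_type,
                       const unsigned char **out, size_t *outlen,
                       int *al, void *add_arg)
{
	struct sctl_blob *sctl = add_arg;

	(void)ssl; (void)ext_type; (void)al;
	if (!sctl || !sctl->data)
		return 0; /* nothing to send for this certificate */
	*out = sctl->data;
	*outlen = sctl->len;
	return 1;
}

/* Registration, once per SSL_CTX (OpenSSL only calls sctl_add_cb when the
 * ClientHello contained extension 18):
 *   SSL_CTX_add_server_custom_ext(ctx, EXT_SCT, sctl_add_cb,
 *                                 NULL, &blob, NULL, NULL);
 * Picking haproxy.pem.rsa.sctl vs haproxy.pem.ecdsa.sctl per selected
 * certificate is the part that needs the multi-cert refactoring above. */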

I'll try to take a look at the patch set over the coming weekend if I
find some time.

-- 
Janusz Dziemidowicz



Re: Owncloud through Haproxy makes upload not possible

2015-11-19 Thread Janusz Dziemidowicz
2015-11-19 15:45 GMT+01:00 Piotr Kubaj :
> Now, about RSA vs ECDSA. I simply don't trust ECDSA. There are quite a
> lot of questions about constants used by ECDSA, which seem to be
> chosen quite arbitrarily by its creator, which happens to be NSA.
> These questions of course remain unanswered. Even respected scientists
> like Schneier say that RSA should be used instead (see
> https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1675929

But ECDSA itself does not contain any constants (see
https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm).
Yes, you have to choose domain parameters, and the most commonly used
are the NIST ones. But you can also use Brainpool curves, which
specifically avoid using any arbitrary constants (see
http://www.ecc-brainpool.org/download/Domain-parameters.pdf); they are
even defined for TLS (https://tools.ietf.org/html/rfc7027) and
apparently supported by the latest OpenSSL, though unfortunately not by
anything else.
OK, anyway, that's your preference; I'm not going to argue for or
against ECDSA here ;)

> ). When I'm done setting up my HTTP(S) services, I'll simply limit
> incoming connections on my firewall so DDoS'ing won't be possible,
> unless you DDoS my firewall :)

I've never said anything about DDoS. In such a setup there is no need
for a distributed DoS. The CPU usage of RSA 8192 is so high that a
single shell script running on a single attack machine can kill any
server.
If you are willing to limit your connection rate on the firewall to a
few per second, then fine ;)

As for your problem: now that it seems like an SSL problem, can you
just try with RSA 4096 or 2048? RSA 8192 is really not well tested in
most code, so maybe the problem is in fact related.

-- 
Janusz Dziemidowicz



Re: Owncloud through Haproxy makes upload not possible

2015-11-19 Thread Janusz Dziemidowicz
2015-11-19 11:13 GMT+01:00 Piotr Kubaj :
>> 4096 bit DH params will be pretty slow to handshake. Maybe that's
>> okay in your circumstance though since you seem to be using this
>> for a personal use and not expecting a high connection rate. You
>> also have a 8 kbit RSA self signed certificate and using 256 bit
>> ciphers which increase TLS overhead.
> I want it to be secure, and I don't want to touch my settings for
> quite a while so I just took the strongest algorithms there are, and
> 2x recommended values for things like private key, or DH params. The
> hardware is pretty powerful and I've already checked that I don't have
> a huge load.

Take note that doubling the RSA key size reduces the number of
connections you can accept by a factor of 10 or more.
For example, my quite powerful desktop with a recent CPU can accept 973
connections per second per core for RSA 2048, but only 136 connections
per second for RSA 4096. OpenSSL does not have tests for RSA 8192, but
that would be on the order of a few connections per second. RSA 8192
is really overkill; it would be possible to DoS your server with a
simple shell script ;) If you want state-of-the-art cryptography you
should probably use an ECDSA certificate; it will be both faster and
more secure.

-- 
Janusz Dziemidowicz



Re: Owncloud through Haproxy makes upload not possible

2015-11-18 Thread Janusz Dziemidowicz
2015-11-18 19:45 GMT+01:00 Bryan Talbot :
> AFAIK, HPKP is only somewhat supported by only the most recent browser
> releases. I believe that it's also ignored by them for certificates which
> are self-signed or signed by a CA that is not in the browsers system-defined
> CA set. Probably doesn't cause your issue but who knows -- it is still
> experimental.

There is also one more detail people often miss about HPKP. In order
for HPKP to work, you MUST have a backup pin, that is, a pin for a
certificate that is kept offline. That means at least two pins;
otherwise the whole header is ignored. See RFC 7469 section 2.5. Also
use in-browser tools, like Chrome's net-internals, to verify that the
header is correctly noted by the browser.

-- 
Janusz Dziemidowicz



Re: haproxy 1.5.4 with ssl-bridging

2015-09-30 Thread Janusz Dziemidowicz
2015-09-29 21:36 GMT+02:00 Douglas Harmon :
> Hello group. I'm new to haproxy. I have read the documentation but
> still require some assistance. I'm trying to configure haproxy to:
>
> 1. accept https connection with client certs required.
> 2. pass the client cert to a backend https server based on https url path
>
> First, can I accomplish this with haproxy? If so, could someone share
> a sample haproxy 1.5 configuration? I have the item 1 above working in
> tcp mode. But I believe I need to be in http mode to get item 2 to
> work.

This is not possible. It is not a haproxy limitation; it is impossible
to do with SSL, as you are effectively trying to perform a
man-in-the-middle attack, and SSL is designed to prevent exactly that.

You can either:
1. require the client SSL cert on haproxy and decrypt the traffic to
see the URL, but then you cannot "forward" the client certificate to
the backend, or
2. configure haproxy in TCP mode and forward the encrypted traffic to
the backend, but then you cannot see the URL.

You cannot have both; the SSL protocol does not allow such an operation.

What you can do, which is usually what people want, is to implement
option 1 and set a custom HTTP header with the client certificate
details (search the haproxy documentation for X-SSL-Client-CN, for
example). Your backend will not see the client certificate in an SSL
handshake, but it can read the header for certificate information.

-- 
Janusz Dziemidowicz



Re: [PATCH] Certificate Transparency support

2015-03-07 Thread Janusz Dziemidowicz
2015-03-07 10:19 GMT+01:00 Emeric Brun :
> Hi Janusz,
>
>> +   *sctl = calloc(1, sizeof(struct chunk));
>> +   if (!*sctl)
>> +   goto end;
>> +   if (!chunk_dup(*sctl, &trash)) {
>
> * maybe here

If chunk_dup fails then the destination is not allocated, so I believe
it is not necessary.

>>
>> +   free(*sctl);
>> +   *sctl = NULL;
>> +   goto end;
>> +   }
>> +
>> +   ret = ssl_sock_parse_sctl(*sctl);
>> +   if (ret) {
>
> * and definitely here

Sure.

>> +   free(*sctl);
>> +   *sctl = NULL;
>> +   goto end;
>> +   }
>
> a call to chunk_destroy seems to be missing.
>
> For the rest, the patch has my approval.

I'll send an updated patch shortly. I've changed this so that the SCTL
is first parsed from the trash buffer and only then copied. That makes
it a bit shorter.
-- 
Janusz Dziemidowicz



Re: [PATCH] Certificate Transparency support

2015-03-06 Thread Janusz Dziemidowicz
2015-03-05 21:35 GMT+01:00 Willy Tarreau :
> Well, I don't know if it's the right way to implement it, I'll let the
> SSL experts review your work. However what I can say is that it's the
> right way to write and submit a patch for quick inclusion. Your code is
> very clean is the doc is provided as well. Good job for a first patch!
>
> Concerning 1.5, we avoid backporting features into 1.5 to avoid reproducing
> the mess that 1.4 was with regressions. That said, we seldom make a few
> exceptions when the feature addresses an ongoing problem to expect soon.
> Here I don't think it's the case, but if everyone thinks it would be nice
> to have it there, users decide :-)

No problem, I've just mentioned it for completeness. Currently
Certificate Transparency is required by Chrome only for EV
certificates issued in 2015. Most major CAs already embed SCTs in
issued certificates (for example, see the certificate at
https://www.digicert.com/). So this patch is of interest mainly to
people having an EV certificate from a CA not participating in CT. The
patch also requires OpenSSL 1.0.2, which was released just recently,
so not many users will push for this :)

-- 
Janusz Dziemidowicz