Re: HAProxy 1.8.X crashing

2018-04-11 Thread Willy Tarreau
Hi Olivier,

On Wed, Apr 11, 2018 at 05:29:15PM +0200, Olivier Houchard wrote:
> From 7c9f06727cf60acf873353ac71283ff9c562aeee Mon Sep 17 00:00:00 2001
> From: Olivier Houchard 
> Date: Wed, 11 Apr 2018 17:23:17 +0200
> Subject: [PATCH] BUG/MINOR: connection: Setup a mux when in proxy mode.
> 
> We were allocating a new connection when in proxy mode, but did not provide
> it a mux, thus crashing later.
> 
> This should be backported to 1.8.
> ---
>  src/proto_http.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/src/proto_http.c b/src/proto_http.c
> index 80e001d69..817692c48 100644
> --- a/src/proto_http.c
> +++ b/src/proto_http.c
> @@ -62,6 +62,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -3718,6 +3719,8 @@ int http_process_request(struct stream *s, struct channel *req, int an_bit)
>  
>   return 0;
>   }
> + /* XXX: We probably need a better mux */
> + conn_install_mux(conn, &mux_pt_ops, objt_cs(s->si[1].end));
>  
>   path = http_get_path(txn);
>   url2sa(req->buf->p + msg->sl.rq.u,

While I can understand how missing this can have caused trouble, I'm not
sure it's the best solution, and I'm a bit worried about having various
code places choose a mux and install it. I suspect we're "simply" missing
a test in the backend code to cover the case where the connection already
exists but a mux was not yet installed (a situation that didn't exist
prior to muxes which is why we didn't notice this). I think that this
happens in connect_server() around line 1177 (return SF_ERR_INTERNAL)
when conn_xprt_ready() indicates the transport was not yet initialized.
It's likely that the error unrolling fails to consider the case where
the mux wasn't initialized, leading to the crash Praveen experienced.

If this is right, it would mean two fixes (rough sketch below) :
  - one in the error unrolling path to ensure we don't destroy a
non-allocated mux ;
  - one just above the return SF_ERR_INTERNAL in connect_server() to
handle the case where the connection already exists but not the mux.
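
Roughly, and only as an untested sketch (the names come from the quoted patch
and from the backtrace later in this thread; the exact guards and their
placement are to be decided):

    /* 1) error unrolling path (cs_destroy() per the backtrace): do not
     *    dereference a mux that was never installed */
    if (cs->conn->mux)
            cs->conn->mux->detach(cs);

    /* 2) in connect_server(), before the "return SF_ERR_INTERNAL" mentioned
     *    above, when the connection already exists but has no mux yet */
    if (!conn->mux)
            conn_install_mux(conn, &mux_pt_ops, objt_cs(s->si[1].end));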

What do you think ?

Willy



Re: 1.8.7 http-tunnel doesn't seem to work? (but default http-keep-alive does)

2018-04-11 Thread Willy Tarreau
Hi Pieter,

On Thu, Apr 12, 2018 at 12:16:21AM +0200, PiBa-NL wrote:
> Hi List / Willy,
> 
> Removing the line below 'fixes' my issue with the kqueue poller and NTLM
> authentication with option http-tunnel..
> Though I'm sure something else is then horribly broken as well (CPU goes to
> 100%..). And I'm not sure what the proper fix would be. (I've got too little
> knowledge of what the various flags do and C++ isn't a language I normally
> ever look at.. )

Don't worry for this, it's already great that you went that far!

> The 'breaking' commit was this one: 
> http://git.haproxy.org/?p=haproxy-1.8.git;a=commit;h=f839593dd26ec210ba66d74b2a4c2040dd1ed806
> 
> Can you take a new look at that piece of code? (as the commit was yours ;) )
> Thanks in advance :).

Thank you very much for pointing the exact line that causes you trouble.
I'm pretty sure that your fix breaks something else and causes some events
to possibly be missed in some cases, but at the moment I'm having a hard
time figuring out the details. It could be that an event (stop reading) is
improperly reported through the mux in tunnel mode.

Would you have the ability to try the latest 1.9-dev just by chance ? I'm
interested, since the FDs work a bit differently there. If it happens not
to malfunction there, it could help us compare the behaviours of the two and
more easily spot the culprit.

Thanks!
Willy



Re: 1.8.7 http-tunnel doesn't seem to work? (but default http-keep-alive does)

2018-04-11 Thread PiBa-NL

Hi List / Willy,

Removing the line below 'fixes' my issue with the kqueue poller and NTLM 
authentication with option http-tunnel..
Though I'm sure something else is then horribly broken as well (CPU goes to 
100%..). And I'm not sure what the proper fix would be. (I've got too 
little knowledge of what the various flags do and C++ isn't a language I 
normally ever look at.. )
The 'breaking' commit was this one: 
http://git.haproxy.org/?p=haproxy-1.8.git;a=commit;h=f839593dd26ec210ba66d74b2a4c2040dd1ed806


Can you take a new look at that piece of code? (as the commit was yours ;) )
Thanks in advance :).

Regards,
PiBa-NL (Pieter)

 src/ev_kqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/ev_kqueue.c b/src/ev_kqueue.c
index a103ece..49e7302 100644
--- a/src/ev_kqueue.c
+++ b/src/ev_kqueue.c
@@ -78,7 +78,7 @@ REGPRM2 static void _do_poll(struct poller *p, int exp)
         else if (fdtab[fd].polled_mask & tid_bit)
             EV_SET(&kev[changes++], fd, EVFILT_WRITE, EV_DELETE, 0, 0, NULL);


-            HA_ATOMIC_OR(&fdtab[fd].polled_mask, tid_bit);
+//            HA_ATOMIC_OR(&fdtab[fd].polled_mask, tid_bit);
     }
 }
 if (changes)



Op 10-4-2018 om 23:11 schreef PiBa-NL:

Hi Haproxy List,

I upgraded to 1.8.7 (coming from 1.8.3) and found I could no longer 
use one of our IIS websites. The login procedure that's using Windows 
authentication / NTLM seems to fail..
Removing option http-tunnel seems to fix this though. Afaik 
http-tunnel 'should' switch to tunnel mode after the first request and 
as such should have no issue sending the credentials to the server.


Below are:  config / haproxy -vv / tcpdump / sess all

Is it a known issue? Is there anything else I can provide?

Regards,

PiBa-NL (Pieter)

-
# Automaticaly generated, dont edit manually.
# Generated on: 2018-04-10 21:00
global
    maxconn            1000
    log            192.168.8.10    local1    info
    stats socket /tmp/haproxy.socket level admin
    gid            80
    nbproc            1
    nbthread            1
    hard-stop-after        15m
    chroot                /tmp/haproxy_chroot
    daemon
    tune.ssl.default-dh-param    2048
    defaults
    option log-health-checks


frontend site.domain.nl2
    bind            192.168.8.5:443 name 192.168.8.5:443  ssl  crt /var/etc/haproxy/site.domain.nl2.pem crt-list /var/etc/haproxy/site.domain.nl2.crt_list

    mode            http
    log            global
    option            httplog
    option            http-tunnel
    maxconn            100
    timeout client        1h
    option tcplog
    default_backend website-intern_http_ipvANY

backend site-intern_http_ipvANY
    mode            http
    log            global
    option            http-tunnel
    timeout connect        10s
    timeout server        1h
    retries            3
    server            site 192.168.13.44:443 ssl  weight 1.1 verify none

-
[2.4.3-RELEASE][root@pfsense_5.local]/root: haproxy -vv
HA-Proxy version 1.8.7 2018/04/07
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-address-of-packed-member 
-Wno-null-dereference -Wno-unused-label -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 
USE_ACCEPT4=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with Lua version : Lua 5.3.4
Built with OpenSSL version : OpenSSL 1.0.2m-freebsd  2 Nov 2017
Running on OpenSSL version : OpenSSL 1.0.2m-freebsd  2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
    [TRACE] trace
    [COMP] compression
    [SPOE] spoe
-
tcpdump of : Client 8.32>Haproxy 8.5:

21:09:13.452118 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [S], 
seq 1417754656, win 8192, options [mss 1260,nop,wscale 
8,nop,nop,sackOK], length 0
21:09:13.452312 IP 192.168.8.5.443 > 192.168.8.32.51

Re: [PATCH 1/5]: random generator functions wrapper

2018-04-11 Thread Aleksandar Lazic
As far as I understand your patch, it looks to me like a new random value 
should be returned.

As I'm not an expert, I just want to ask if this new feature makes sense here.

I have seen the GRND_NONBLOCK flag; does anyone have experience with this 
function in a non-blocking application?

Best regards
Aleks
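
For reference, a minimal sketch (not haproxy code) of what GRND_NONBLOCK changes:
getrandom() then fails with EAGAIN instead of blocking while the kernel entropy
pool is not yet initialized, so the caller has to handle that case itself:

    #include <sys/random.h>   /* getrandom(), Linux >= 3.17, glibc >= 2.25 */
    #include <errno.h>

    /* returns 0 on success, -1 if no randomness could be obtained right now */
    static int read_seed(unsigned int *seed)
    {
            ssize_t ret = getrandom(seed, sizeof(*seed), GRND_NONBLOCK);

            if (ret == (ssize_t)sizeof(*seed))
                    return 0;
            if (ret < 0 && errno == EAGAIN) {
                    /* entropy pool not ready yet: retry later, block,
                     * or fall back to a weaker source */
                    return -1;
            }
            return -1;    /* other error (EINTR, ENOSYS on old kernels, ...) */
    }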



 Ursprüngliche Nachricht 
Von: David CARLIER 
Gesendet: 11. April 2018 19:41:30 MESZ
An: Aleksandar Lazic 
CC: Haproxy , Willy TARREAU 
Betreff: Re: [PATCH 1/5]: random generator functions wrapper

Plus, I'm not sure it really fits the usage here, on second thoughts. Regards.

On Wed 11 Apr 2018 6:36 PM David CARLIER  wrote:

> Hi Aleks. getrandom is available as a syscall since kernel version 3.17 (I
> think, correct me if I'm wrong). Plus, good care is needed when calling it in
> non-blocking mode, I think. Cheers.
>
> On Wed 11 Apr 2018 6:26 PM Aleksandar Lazic  wrote:
>
>> Hi David.
>>
>> How about using getrandom instead of random?
>>
>> http://man7.org/linux/man-pages/man2/getrandom.2.html
>> https://lwn.net/Articles/711013/
>>
>> Best regards
>> Aleks
>>
>>
>> --
>> *Von:* David CARLIER 
>> *Gesendet:* 11. April 2018 18:33:46 MESZ
>> *An:* Haproxy , Willy TARREAU > >
>> *Betreff:* [PATCH 1/5]: random generator functions wrapper
>>
>> Hi dear list,
>>
>> Here is a patch proposal to have a wrapper around rand/random calls.
>>
>> If it's OK with this one I'll send the rest.
>>
>> Thanks.
>>
>> Kind regards.
>>
>


Re: [PATCH 1/5]: random generator functions wrapper

2018-04-11 Thread David CARLIER
Plus, I'm not sure it really fits the usage here, on second thoughts. Regards.

On Wed 11 Apr 2018 6:36 PM David CARLIER  wrote:

> Hi Aleks. getrandom is available as a syscall since kernel version 3.17 (I
> think, correct me if I'm wrong). Plus, good care is needed when calling it in
> non-blocking mode, I think. Cheers.
>
> On Wed 11 Apr 2018 6:26 PM Aleksandar Lazic  wrote:
>
>> Hi David.
>>
>> How about using getrandom instead of random?
>>
>> http://man7.org/linux/man-pages/man2/getrandom.2.html
>> https://lwn.net/Articles/711013/
>>
>> Best regards
>> Aleks
>>
>>
>> --
>> *Von:* David CARLIER 
>> *Gesendet:* 11. April 2018 18:33:46 MESZ
>> *An:* Haproxy , Willy TARREAU > >
>> *Betreff:* [PATCH 1/5]: random generator functions wrapper
>>
>> Hi dear list,
>>
>> Here is a patch proposal to have a wrapper around rand/random calls.
>>
>> If it's OK with this one I'll send the rest.
>>
>> Thanks.
>>
>> Kind regards.
>>
>


Re: [PATCH 1/5]: random generator functions wrapper

2018-04-11 Thread David CARLIER
Hi Aleks. getrandom is available as a syscall since kernel version 3.17 (I think,
correct me if I'm wrong). Plus, good care is needed when calling it in
non-blocking mode, I think. Cheers.

On Wed 11 Apr 2018 6:26 PM Aleksandar Lazic  wrote:

> Hi David.
>
> How about using getrandom instead of random?
>
> http://man7.org/linux/man-pages/man2/getrandom.2.html
> https://lwn.net/Articles/711013/
>
> Best regards
> Aleks
>
>
> --
> *Von:* David CARLIER 
> *Gesendet:* 11. April 2018 18:33:46 MESZ
> *An:* Haproxy , Willy TARREAU 
> *Betreff:* [PATCH 1/5]: random generator functions wrapper
>
> Hi dear list,
>
> Here is a patch proposal to have a wrapper around rand/random calls.
>
> If it's OK with this one I'll send the rest.
>
> Thanks.
>
> Kind regards.
>


Re: [PATCH 1/5]: random generator functions wrapper

2018-04-11 Thread Aleksandar Lazic
Hi David.

How about using getrandom instead of random?

http://man7.org/linux/man-pages/man2/getrandom.2.html
https://lwn.net/Articles/711013/

Best regards
Aleks



 Ursprüngliche Nachricht 
Von: David CARLIER 
Gesendet: 11. April 2018 18:33:46 MESZ
An: Haproxy , Willy TARREAU 
Betreff: [PATCH 1/5]: random generator functions wrapper

Hi dear list,

Here is a patch proposal to have a wrapper around rand/random calls.

If it's OK with this one I'll send the rest.

Thanks.

Kind regards.


Re: haproxy=1.8.5 stuck in thread syncing

2018-04-11 Thread Максим Куприянов
Hi!

Thank you very much for the patches. Looks like they helped.

2018-03-29 14:25 GMT+05:00 Christopher Faulet :

> Le 28/03/2018 à 14:16, Максим Куприянов a écrit :
>
>> Hi!
>>
>> I'm sorry but the configuration is too huge to share (over 100 different
>> proxy sections). This is also the reason I can't exactly determine the
>> failing section. Is there a way to get this data from a core file?
>>
>> 2018-03-28 11:18 GMT+03:00 Christopher Faulet > >:
>>
>> Le 28/03/2018 à 09:36, Максим Куприянов a écrit :
>>
>> Hi!
>>
>> Yesterday one of our haproxies (1.8.5) with nbthread=8 set in
>> its config got stuck at 800% CPU usage. Some responses were served
>> successfully but many of them just timed out. perf top showed
>> this:
>>59.19%  [.] thread_enter_sync
>>32.68%  [.] fwrr_get_next_server
>>
>>
>> Hi,
>>
>> Could you share your configuration please ? It will help to diagnose
>> the problem. In your logs, what are the values of the srv_queue and
>> backend_queue fields ?
>>
>>
> Hi,
>
> Ok, I partly reproduced your problem using a backend with a hundred
> servers and a maxconn of 2 for each one. In this case, I observe the same
> CPU consumption. I have no timeouts (it probably depends on your values) but
> performance is quite low.
>
> I think you're hitting a limitation of the current design. We have no
> mechanism to migrate entities between threads. So to force threads to wake up,
> we use the sync point. It was not designed to be called very often. In your
> case, it eats all the CPU.
>
> I attached 3 patches. They add a mechanism to wake up threads selectively
> without any lock or loop. They must be applied on HAProxy 1.8 (they will not
> work on the upstream). So you can check whether it fixes your problem or not. It
> will be useful to validate that it is a design limitation and not a bug.
>
> This is just an experiment. I hope it works well but I didn't do a
> lot of testing. If so, I'll then discuss with Willy whether or not it is
> pertinent to do the thread wakeups this way. But, in any case, it will probably
> not be backported to HAProxy 1.8.
>
> --
> Christopher Faulet
>


[PATCH 1/5]: random generator functions wrapper

2018-04-11 Thread David CARLIER
Hi dear list,

Here is a patch proposal to have a wrapper around rand/random calls.

If it's OK with this one I'll send the rest.

Thanks.

Kind regards.
From 01b29c12410dba2797815aed1be602abc435902e Mon Sep 17 00:00:00 2001
From: David Carlier 
Date: Wed, 11 Apr 2018 17:20:49 +0100
Subject: [PATCH 1/5] BUILD/MEDIUM: standard: my_srand*/my_rand* functions.

BSD family systems have strong random number functions
based on the ChaCha family of algorithms. So we can use them
where it seems fit; other systems ought not be affected.
---
 include/common/standard.h | 44 
 1 file changed, 44 insertions(+)

diff --git a/include/common/standard.h b/include/common/standard.h
index 6542759d9..2b1fa133b 100644
--- a/include/common/standard.h
+++ b/include/common/standard.h
@@ -839,6 +839,50 @@ static inline unsigned int my_ffsl(unsigned long a)
 	return cnt;
 }
 
+/* my_random call seeding */
+static inline void my_srandom(unsigned int seed)
+{
+#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__)
+	(void)seed;
+#else
+	srandom(seed);
+#endif
+}
+
+/* Generates long random value */
+static inline long my_random(void)
+{
+#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__)
+	long value;
+	arc4random_buf(&value, sizeof(value));
+	return value;
+#else
+	return random();
+#endif
+}
+
+/* my_rand call seeding */
+static inline void my_srand(unsigned int seed)
+{
+#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__)
+	(void)seed;
+#else
+	srand(seed);
+#endif
+}
+
+/* Generates int random value */
+static inline int my_rand(void)
+{
+#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__)
+	int value;
+	arc4random_buf(&value, sizeof(value));
+	return value;
+#else
+	return rand();
+#endif
+}
+
 /* Build a word with the  lower bits set (reverse of my_popcountl) */
 static inline unsigned long nbits(int bits)
 {
-- 
2.16.2
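
As a usage sketch (illustrative only, not part of the patch): callers keep the
usual seed-then-draw pattern, and on the BSDs the seed becomes a no-op while the
value comes from the kernel's ChaCha-backed arc4random_buf():

    static long draw_value(unsigned int seed)
    {
            my_srandom(seed);      /* no-op on Free/Open/NetBSD */
            return my_random();    /* arc4random_buf() there, random() elsewhere */
    }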



Re: HAProxy 1.8.X crashing

2018-04-11 Thread Olivier Houchard
Hi Praveen,

On Wed, Apr 11, 2018 at 02:16:28PM +, UPPALAPATI, PRAVEEN wrote:
> Hi Haproxy-Team,
> 
> I tried compiling different minor versions of the 1.8.x releases and all the 
> minor versions are crashing when trying to use option http_proxy:
> 
> Configuration that is causing issue:
> 
> listen http_proxy-
> bind *:9876
> mode http
> option httplog
> http-request set-uri http://%[url_param(idnsredirHost)]%[capture.req.uri]
> option http_proxy
> 
> If I don't use option http_proxy things work normally. Following is from the 
> core dump:
> 
> : #0 0x00454839 in cs_destroy (cs=0x207edd0) at 
> include/proto/connection.h:704
> 704 cs->conn->mux->detach(cs);
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7.x86_64 
> keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 
> libcom_err-1.42.9-10.el7.x 86_64 libselinux-2.5-11.el7.x86_64 
> openssl-libs-1.0.2k-8.el7.x86_64 pcre-8.32-17.el7.x86_64 
> zlib-1.2.7-17.el7.x86_64
> (gdb) bt
> #0 0x00454839 in cs_destroy (cs=0x207edd0) at 
> include/proto/connection.h:704
> #1 si_release_endpoint (si=0x2083540) at include/proto/stream_interface.h:162
> #2 stream_free (s=0x20832e0) at src/stream.c:398
> #3 process_stream (t=) at src/stream.c:2513
> #4 0x004bc38e in process_runnable_tasks () at src/task.c:229
> #5 0x00408d9c in run_poll_loop () at src/haproxy.c:2399
> #6 run_thread_poll_loop (data=) at src/haproxy.c:2461
> #7 main (argc=, argv=0x7ffe6e2cf2d8) at src/haproxy.c:3065
> (gdb) quit
> 
> 
[...]
> Please let me know what the root cause is; this option works fine with the 
> 1.7.x version.
> 

It's related to changes we made in the architecture in 1.8.
The attached patch should fix it. It was made for master, but should apply to
1.8 as well.

Thanks for reporting !

Olivier
>From 7c9f06727cf60acf873353ac71283ff9c562aeee Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Wed, 11 Apr 2018 17:23:17 +0200
Subject: [PATCH] BUG/MINOR: connection: Setup a mux when in proxy mode.

We were allocating a new connection when in proxy mode, but did not provide
it a mux, thus crashing later.

This should be backported to 1.8.
---
 src/proto_http.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/proto_http.c b/src/proto_http.c
index 80e001d69..817692c48 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -62,6 +62,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -3718,6 +3719,8 @@ int http_process_request(struct stream *s, struct channel *req, int an_bit)
 
return 0;
}
+   /* XXX: We probably need a better mux */
+   conn_install_mux(conn, &mux_pt_ops, objt_cs(s->si[1].end));
 
path = http_get_path(txn);
url2sa(req->buf->p + msg->sl.rq.u,
-- 
2.14.3



HAProxy 1.8.X crashing

2018-04-11 Thread UPPALAPATI, PRAVEEN
Hi Haproxy-Team,

I tried compiling different minor versions of the 1.8.x releases and all the minor 
versions are crashing when trying to use option http_proxy:

Configuration that is causing issue:

listen http_proxy-
bind *:9876
mode http
option httplog
http-request set-uri http://%[url_param(idnsredirHost)]%[capture.req.uri]
option http_proxy
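
For illustration only (addresses below are made up), a request like this is
rewritten by the set-uri rule above before option http_proxy forwards it:

    $ curl 'http://haproxy-host:9876/some/path?idnsredirHost=192.0.2.10:8080'
    # set-uri turns the request line into:
    #   GET http://192.0.2.10:8080/some/path?idnsredirHost=192.0.2.10:8080 HTTP/1.1
    # option http_proxy then takes the connect address from that absolute URI
    # (it performs no DNS resolution, hence the literal IP here)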

If I don't use option http_proxy things work normally. Following is from the 
core dump:

: #0 0x00454839 in cs_destroy (cs=0x207edd0) at 
include/proto/connection.h:704
704 cs->conn->mux->detach(cs);
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7.x86_64 
keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 
libcom_err-1.42.9-10.el7.x 86_64 libselinux-2.5-11.el7.x86_64 
openssl-libs-1.0.2k-8.el7.x86_64 pcre-8.32-17.el7.x86_64 
zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0 0x00454839 in cs_destroy (cs=0x207edd0) at 
include/proto/connection.h:704
#1 si_release_endpoint (si=0x2083540) at include/proto/stream_interface.h:162
#2 stream_free (s=0x20832e0) at src/stream.c:398
#3 process_stream (t=) at src/stream.c:2513
#4 0x004bc38e in process_runnable_tasks () at src/task.c:229
#5 0x00408d9c in run_poll_loop () at src/haproxy.c:2399
#6 run_thread_poll_loop (data=) at src/haproxy.c:2461
#7 main (argc=, argv=0x7ffe6e2cf2d8) at src/haproxy.c:3065
(gdb) quit



HaProxy Install Details:

HA-Proxy version 1.8.4-1deb90d 2018/02/08
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label
  OPTIONS = USE_LIBCRYPT=1 USE_ZLIB=1 USE_THREAD=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

Please let me know what the root cause is; this option works fine with the 
1.7.x version.

Thanks,
Praveen.




Re: Segfault in haproxy v1.8 with Lua

2018-04-11 Thread Hessam Mirsadeghi
Hi Christopher,

You're right; that segfault happens with the build at the faulty commit and
not later versions such as v1.8.5.
However, version v1.8.5 does segfault with the attached modified Lua
script. As far as I can tell, the problem arises after any call to
"txn.res:set()".

In the attached Lua script, if you remove the call to either of
"txn.res:set(txn.res:get())" or "txn.res:forward(txn.res:get_in_len())",
the segfault will disappear.
Also, when I only have a call to "txn.res:set(txn.res:get())" in the
script, haproxy becomes unresponsive to all but the first request on each
persistent connection. That is, something like "curl -sig localhost:80
localhost:80" will only get the response for the first request; the second
one times out on the existing connection and succeeds only on a second
connection established by curl.

--- Here is the output of "haproxy -vv" for the new segfault
---

HA-Proxy version 1.8.5 2018/03/23
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-format-truncation -Wno-null-dereference
-Wno-unused-label
  OPTIONS = USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0h-fips  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.1.0h-fips  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.41 2017-07-05
Running on PCRE version : 8.41 2017-07-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace


Best,
Seyed


On Wed, Apr 11, 2018 at 5:46 AM, Christopher Faulet 
wrote:

> Le 11/04/2018 à 01:31, Hessam Mirsadeghi a écrit :
>
>> Hi,
>>
>> I have a simple Lua http-response action script that leads to a
>> segmentation fault in haproxy. The Lua script is a simple call
>> to txn.res:forward(0).
>> A sample haproxy config and the Lua script files are attached. The
>> backend is simply an nginx instance which responds with 204 No Content.
>>
>> The commit that introduces this problem is:
>> commit 8a5949f2d74c3a3a6c6da25449992c312b183ef3
>>  BUG/MEDIUM: http: Switch the HTTP response in tunnel mode as earlier
>> as possible
>>
>> Any ideas?
>>
>>
> Hi,
>
> I'm unable to reproduce the segfault using your example. Could you provide
> the output of "haproxy -vv" and the full backtrace of your segfault ?
>
> Regards,
>
> --
> Christopher Faulet
>
function foo(txn)
    -- re-inject the response body unchanged; per the report above, any
    -- call to txn.res:set() is enough to trigger the problem
    txn.res:set(txn.res:get())
    -- forward the whole (re-set) input buffer to the client
    txn.res:forward(txn.res:get_in_len())
end

core.register_action("foo", {"http-res"}, foo)
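
For context, a minimal config sketch that would exercise the script above (the
original attachment is not reproduced here, so paths and addresses are made up):

    global
        lua-load /etc/haproxy/foo.lua

    frontend fe
        bind :80
        mode http
        http-response lua.foo
        default_backend be

    backend be
        mode http
        server nginx 127.0.0.1:8080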


SSL/TLS support for peers

2018-04-11 Thread Frederic Lecaille

Hello ML,

This is a first patch attempt to add the SSL/TLS support to peers.

Everything is detailed in the commit log. This patch is not supposed to 
be integrated right now because the documentation is missing. 
Furthermore there are remaining SSL/TLS keywords to be supported which 
must be identified. Any advice would be appreciated.


Fred
>From 0bc8780e90140b638644c3912ba22f964143c130 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Wed, 11 Apr 2018 14:04:26 +0200
Subject: [PATCH] MINOR: peers: Add SSL/TLS support.

As all peers belonging to the same "peers" section share the
same SSL/TLS settings, the latter are provided on lines which
may potentially be the last ones of a "peers" section. Such lines
are not identified by their first words. Perhaps this should be the case.

So, this patch extracts the current code used to set up the connection binding
of each frontend for each "peers" section so that it can be used after
having completely parsed "peers" sections. A "bind_conf" structure field
has been added to "peers" structures to do so. As a counterpart, a "server"
structure field has also been added to the "peers" structure to store at parsing
time the SSL/TLS settings used to connect to remote peers.

Both of these new "peers" fields are used after having parsed the configuration
files to set up the SSL/TLS settings of all peers in the section (local or remote).
This is the reason why these two new structure fields have been added to the "peer"
structure.

This patch also adds two new keywords, "ssl" and "cert", to basically enable
SSL/TLS usage for a "peers" section. Their syntax is identical to the one used
on "bind" or "server" lines.

Ex:
   # Enable SSL/TLS support for "my_peers" section peers
   peers my_peers
  peer foo1 ...
  peer foo2 ...
  ssl cert my/cert.pem
---
 include/proto/peers.h  |  67 ++
 include/proto/server.h |   1 +
 include/types/peers.h  |  24 
 src/cfgparse.c | 150 +++--
 src/peers.c|  38 -
 src/server.c   |   2 +-
 src/ssl_sock.c |  44 +++
 7 files changed, 293 insertions(+), 33 deletions(-)

diff --git a/include/proto/peers.h b/include/proto/peers.h
index 782b66e..c5f70c1 100644
--- a/include/proto/peers.h
+++ b/include/proto/peers.h
@@ -28,9 +28,76 @@
 #include 
 #include 
 
+#include 
+
+struct peers_kw *peers_find_kw(const char *kw);
+void peers_register_keywords(struct peers_kw_list *kwl);
+
 void peers_init_sync(struct peers *peers);
 void peers_register_table(struct peers *, struct stktable *table);
 void peers_setup_frontend(struct proxy *fe);
 
+#if defined(USE_OPENSSL)
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream *s)
+{
+	if (p->srv.use_ssl)
+		return &p->srv.obj_type;
+	else
+		return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peers_fe_xprt(struct peers *peers)
+{
+	return peers->bind_conf.is_ssl ? xprt_get(XPRT_SSL) : xprt_get(XPRT_RAW);
+}
+
+static inline int peers_prepare_srvs(struct peers *peers)
+{
+	int ret;
+	struct peer *p;
+
+	if (!peers->srv.use_ssl)
+		return 0;
+
+	ret = 0;
+	for (p = peers->remote; p; p = p->next) {
+		struct xprt_ops *xprt_ops;
+
+		if (p->local)
+			continue;
+
+		p->srv.use_ssl = 1;
+		srv_ssl_settings_cpy(&p->srv, &peers->srv);
+		xprt_ops = xprt_get(XPRT_SSL);
+		if (!xprt_ops || !xprt_ops->prepare_srv)
+			continue;
+
+		p->srv.obj_type = OBJ_TYPE_SERVER;
+		/* These two following fields are required by ssl_sock API
+		 * error handling functions.
+		 */
+		p->srv.proxy = peers->peers_fe;
+		p->srv.id = p->id;
+		ret += xprt_ops->prepare_srv(&p->srv);
+	}
+	return ret;
+}
+#else
+static inline enum obj_type *peer_session_target(struct peer *p, struct stream *s)
+{
+	return &s->be->obj_type;
+}
+
+static inline struct xprt_ops *peers_fe_xprt(struct peers *p)
+{
+	return xprt_get(XPRT_RAW);
+}
+
+static inline int peers_prepare_srvs(struct peers *p)
+{
+	return 0;
+}
+#endif
+
 #endif /* _PROTO_PEERS_H */
 
diff --git a/include/proto/server.h b/include/proto/server.h
index 14f4926..0a4a035 100644
--- a/include/proto/server.h
+++ b/include/proto/server.h
@@ -49,6 +49,7 @@ void apply_server_state(void);
 void srv_compute_all_admin_states(struct proxy *px);
 int srv_set_addr_via_libc(struct server *srv, int *err_code);
 int srv_init_addr(void);
+void srv_ssl_settings_cpy(struct server *srv, struct server *src);
 struct server *cli_find_server(struct appctx *appctx, char *arg);
 void servers_update_status(void);
 
diff --git a/include/types/peers.h b/include/types/peers.h
index 58c8c4e..097223d 100644
--- a/include/types/peers.h
+++ b/include/types/peers.h
@@ -33,6 +33,19 @@
 #include 
 #include 
 
+struct peers_kw {
+	const char *kw;
+	int (*parse)(char **args, int *cur_arg, struct proxy *px,
+	 struct peers *peers, char **err);
+	int skip;
+};
+
+struct peers_kw_list {
+	const cha

HTTP/2 frames with websocket permessage-deflate option

2018-04-11 Thread Dave Cottlehuber
I've been taking HTTP/2 for a spin, using a phoenix[1] app with websockets. The 
basic "does it connect" works very well already (thank-you!) but I'm not sure 
if it's possible to enable per-frame compression within websockets or not -- or 
even intended?

My use case is to reduce the size of JSON blobs traversing a websocket 
connection, where a reasonable portion of frames contain almost-identical JSON  
from one to the next:

http/1.1 backend connection upgraded to websockets
   |
   | JSON blobs...
   |
haproxy
   |
   | JSON blobs...
   |
http/2 frontend to browser (using TLS obviously) 

I can see that my endpoints are requesting permessage-deflate option, but that 
haproxy is not returning that header back to indicate its support for it.
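
For reference, the negotiation in question is just a header exchange during the
websocket upgrade (RFC 7692), roughly:

    Client:  Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
    Server:  Sec-WebSocket-Extensions: permessage-deflate

If the 101 response does not echo the extension back, both sides simply fall back
to uncompressed frames, which matches what is observed here.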

While haproxy has no way of knowing whether a particular stream would benefit from 
compression or not, the application developer *does* know, and I could ensure 
that compressible websocket requests use a different endpoint, or some form of 
header + ACL, to enable that, for example.

Some thoughts:

- in general, I prefer to keep away from compression over TLS because of BREACH 
and CRIME vulnerability classes
- this long-running websockets connection is particularly interesting for 
compression however as the compression tables are apparently maintained across 
sequential frames on the client

Is this something that might come in future releases, or do you feel it's better 
left out due to compression overhead and vulnerability risks?

[1]: http://phoenixframework.org/

$ haproxy -vv
HA-Proxy version 1.8.6 2018/04/05
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow 
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 
USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
[TRACE] trace
[COMP] compression
[SPOE] spoe



Re: Segfault in haproxy v1.8 with Lua

2018-04-11 Thread Christopher Faulet

Le 11/04/2018 à 01:31, Hessam Mirsadeghi a écrit :

Hi,

I have a simple Lua http-response action script that leads to a 
segmentation fault in haproxy. The Lua script is a simple call 
to txn.res:forward(0).
A sample haproxy config and the Lua script files are attached. The 
backend is simply an nginx instance which responds with 204 No Content.


The commit that introduces this problem is:
commit 8a5949f2d74c3a3a6c6da25449992c312b183ef3
     BUG/MEDIUM: http: Switch the HTTP response in tunnel mode as 
earlier as possible


Any ideas?



Hi,

I'm unable to reproduce the segfault using your example. Could you 
provide the output of "haproxy -vv" and the full backtrace of your 
segfault ?


Regards,

--
Christopher Faulet