Re: [PATCH] MEDIUM: proxy_protocol: Send IPv4 addresses when possible

2018-07-17 Thread Willy Tarreau
Hi!

On Tue, Jul 17, 2018 at 01:39:38PM +0200, Lukas Tribus wrote:
> Hello Tim,
> 
> 
> On Fri, 29 Jun 2018 at 21:00, Tim Duesterhus  wrote:
> >
> > This patch changes the sending side of proxy protocol to convert IP
> > addresses to IPv4 when possible (and converts them IPv6 otherwise).
> >
> > Previously the code failed to properly provide information under
> > certain circumstances:
> >
> > 1. haproxy is being accessed using IPv4, http-request set-src sets
> >    an IPv6 address.
> > 2. haproxy is being accessed using IPv6, http-request set-src sets
> >    an IPv4 address.
> > 3. haproxy listens on `::` with v4v6 and is accessed using IPv4:
> >    It would send a TCP6 line, instead of a proper TCP4 line, because
> >    the IP addresses are represented as IPv4-mapped addresses internally.
> >
> > Once correctness of this patch has been verified it should be evaluated
> > whether it should be backported, as (1) and (2) are bugs. (3) is an
> > enhancement.
> 
> Thanks for this, just a comment about nr 3:
> 
> A backend may rely on v4-mapped addresses for various reasons; consider
> a backend that, to simplify its handling of IP addresses, only handles
> IPv6 and expects IPv4 addresses to be mapped.
> Also consider that to send native v4 addresses, the admin only has to
> make a small adjustment to the bind configuration.
> 
> So since this would be a breaking change, and the admin can
> easily reconfigure the bind line at any time, I would advise against this
> and vote for maintaining the current behavior (where the bind
> configuration controls this behavior).

I must say I'm a bit reluctant about this change for the same reasons.
What I would suggest would be to only "upgrade" the addresses to IPv6
if either side already is on IPv6, but never downgrade from IPv6 to
IPv4 since v6-mapped v4 addresses can exist on both sides for a valid
reason.

I don't know what ppv2 does in this situation where source and destination
are in different classes, because initially it was not expected to happen
and this became possible after we introduced set-src :-/

Also I suspect we can have similar issues with unix domain sockets. Let's
say we have a listener on /var/haproxy-sockets/foo which accepts a
connection on which we do a set-src. I don't really know what happens
in this case if we want to send a PP header. Maybe we'd need to improve
the PP spec to be able to mention only one side when the other one is
unknown, though that would obviously not solve the case Tim tried to
address above.

Cheers,
Willy



Re: Building HAProxy 1.8 fails on Solaris

2018-07-17 Thread Thrawn
Mea culpa, I applied the patch incorrectly. After fixing that, I can
successfully build with 'USE_THREAD=' but without 'USE_PTHREAD_PSHARED=yes'
(although from what Olivier said, I probably shouldn't do that).

On Wednesday, 18 July 2018, 12:10:57 pm AEST, Thrawn wrote:
 
  We always clean before building. The shell script I use is currently:

MAKE=/usr/sfw/bin/gmake
$MAKE clean
$MAKE USE_STATIC_PCRE=1 ARCH=native TARGET=solaris PCREDIR=../pcre USE_THREAD= USE_PTHREAD_PSHARED=yes
And after applying the patch (and USE_PTHREAD_PSHARED=yes as you can see), that 
builds successfully :).
Dropping 'USE_THREAD=' causes a fast failure complaining about 
'__sync_fetch_and_add' and '__sync_sub_and_fetch'. Dropping 
'USE_PTHREAD_PSHARED=yes' results in an eventual failure:
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv -Wno-unused-label
-fomit-frame-pointer -DFD_SETSIZE=65536 -D_REENTRANT -D_XOPEN_SOURCE=500
-D__EXTENSIONS__ -DTPROXY -DCONFIG_HAP_CRYPT -DNEED_CRYPT_H
-DUSE_GETADDRINFO -DENABLE_POLL -DUSE_PCRE -I../pcre/include
-DCONFIG_HAPROXY_VERSION=\"1.8.12-8a200c7\"
-DCONFIG_HAPROXY_DATE=\"2018/06/27\" -c -o src/cache.o src/cache.c

In file included from src/cache.c:31:
include/proto/shctx.h: In function `cmpxchg':
include/proto/shctx.h:140: error: invalid type argument of `unary *'
include/proto/shctx.h:140: error: invalid type argument of `unary *'
include/proto/shctx.h:140: warning: left-hand operand of comma expression has no effect
gmake: *** [src/cache.o] Error 1
Thanks for the help :). Anything further that I should test, or should I just
wait for some variant of this patch to land in 1.8.13?
Regards,
Thrawn

On Wednesday, 18 July 2018, 1:33:12 am AEST, Olivier Houchard wrote:
 
 Hi again,

On Tue, Jul 17, 2018 at 01:55:33PM +0200, Olivier Houchard wrote:
> Hi Lukas,
> 
> On Tue, Jul 17, 2018 at 01:08:39PM +0200, Lukas Tribus wrote:
> > On Tue, 17 Jul 2018 at 01:09, Thrawn  
> > wrote:
> > >
> > > Ah, indeed, the GCC version provided on our server is 3.4.3. But the 
> > > readme on https://github.com/haproxy/haproxy says "GCC between 2.95 and 
> > > 4.8". Can the build be changed to continue supporting older GCC, or do 
> > > the docs need an update?
> > 
> > Like I said earlier, "make clean" before compiling with different
> > parameters, like USE_THREAD=
> > 
> > Haproxy 1.8 is supposed to build fine with gcc 3 when disabling
> > threading, but if you just compiled with threads enabled, you do need
> > to "make clean":
> > 
> > 
> > > I think your gcc is just too old. Those appeared around gcc 4.1 or so.
> > 
> > With USE_THREAD= it is supposed to build fine. While threading will
> > not work with gcc 3, we did not drop support for gcc3 altogether.
> > 
> > 
> 
> Unfortunately that is not true: __sync_* was used in include/proto/shctx.h.
> The attached patch uses the haproxy macros instead, and so should get it
> to compile again with older gcc. Thrawn, can you please test it?
> 

After talking with Will a bit, we realized this patch might not work if
using a cache shared across multiple processes.
Can you just add USE_PTHREAD_PSHARED=yes on your command line? It should do
the trick.

Thanks !

Olivier


Re: Building HAProxy 1.8 fails on Solaris

2018-07-17 Thread Thrawn
 We always clean before building. The shell script I use is currently:

MAKE=/usr/sfw/bin/gmake
$MAKE clean
$MAKE USE_STATIC_PCRE=1 ARCH=native TARGET=solaris PCREDIR=../pcre USE_THREAD= USE_PTHREAD_PSHARED=yes
And after applying the patch (and USE_PTHREAD_PSHARED=yes as you can see), that 
builds successfully :).
Dropping 'USE_THREAD=' causes a fast failure complaining about 
'__sync_fetch_and_add' and '__sync_sub_and_fetch'. Dropping 
'USE_PTHREAD_PSHARED=yes' results in an eventual failure:
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv -Wno-unused-label
-fomit-frame-pointer -DFD_SETSIZE=65536 -D_REENTRANT -D_XOPEN_SOURCE=500
-D__EXTENSIONS__ -DTPROXY -DCONFIG_HAP_CRYPT -DNEED_CRYPT_H
-DUSE_GETADDRINFO -DENABLE_POLL -DUSE_PCRE -I../pcre/include
-DCONFIG_HAPROXY_VERSION=\"1.8.12-8a200c7\"
-DCONFIG_HAPROXY_DATE=\"2018/06/27\" -c -o src/cache.o src/cache.c

In file included from src/cache.c:31:
include/proto/shctx.h: In function `cmpxchg':
include/proto/shctx.h:140: error: invalid type argument of `unary *'
include/proto/shctx.h:140: error: invalid type argument of `unary *'
include/proto/shctx.h:140: warning: left-hand operand of comma expression has no effect
gmake: *** [src/cache.o] Error 1
Thanks for the help :). Anything further that I should test, or should I just
wait for some variant of this patch to land in 1.8.13?
Regards,
Thrawn

On Wednesday, 18 July 2018, 1:33:12 am AEST, Olivier Houchard wrote:
 
 Hi again,

On Tue, Jul 17, 2018 at 01:55:33PM +0200, Olivier Houchard wrote:
> Hi Lukas,
> 
> On Tue, Jul 17, 2018 at 01:08:39PM +0200, Lukas Tribus wrote:
> > On Tue, 17 Jul 2018 at 01:09, Thrawn  
> > wrote:
> > >
> > > Ah, indeed, the GCC version provided on our server is 3.4.3. But the 
> > > readme on https://github.com/haproxy/haproxy says "GCC between 2.95 and 
> > > 4.8". Can the build be changed to continue supporting older GCC, or do 
> > > the docs need an update?
> > 
> > Like I said earlier, "make clean" before compiling with different
> > parameters, like USE_THREAD=
> > 
> > Haproxy 1.8 is supposed to build fine with gcc 3 when disabling
> > threading, but if you just compiled with threads enabled, you do need
> > to "make clean":
> > 
> > 
> > > I think your gcc is just too old. Those appeared around gcc 4.1 or so.
> > 
> > With USE_THREAD= it is supposed to build fine. While threading will
> > not work with gcc 3, we did not drop support for gcc3 altogether.
> > 
> > 
> 
> Unfortunately that is not true: __sync_* was used in include/proto/shctx.h.
> The attached patch uses the haproxy macros instead, and so should get it
> to compile again with older gcc. Thrawn, can you please test it?
> 

After talking with Will a bit, we realized this patch might not work if
using a cache shared across multiple processes.
Can you just add USE_PTHREAD_PSHARED=yes on your command line? It should do
the trick.

Thanks !

Olivier
  

Re: Setting up per-domain logging with haproxy

2018-07-17 Thread Shawn Heisey
On 7/17/2018 2:17 PM, Jonathan Matthews wrote:
> That's *entirely* your local syslog daemon's responsibility -
> configure it appropriately, and it'll do what you want.

I seem to remember there being logging options to have haproxy create
logfiles directly, in addition to syslog.  But now when I look, I can't
find it, so I guess I was imagining that.

> Read the haproxy docs on this - you want to tune the "length"
> parameter: 
> http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-log
> 
> As the docs say: some syslog servers allow messages >1024, some don't.

I will have to research how to make rsyslog handle longer messages, and
try out the config in that blog post for per-host logging with rsyslog.

Thanks,
Shawn



Re: Setting up per-domain logging with haproxy

2018-07-17 Thread Jonathan Matthews
Hey Shawn,

On 17 July 2018 at 19:59, Shawn Heisey  wrote:
[snip]
> Can haproxy be configured to create multiple logfiles?  Can the filename
> of each log be controlled easily in the haproxy config?  Can I use
> dynamic info for the logfile name like the value in the Host header?

Haproxy has absolutely nothing to do with the logfile creation! It
doesn't name them, rotate them or write into them.

That's *entirely* your local syslog daemon's responsibility -
configure it appropriately, and it'll do what you want.

Here's someone from 2011 doing exactly that:
https://tehlose.wordpress.com/2011/10/10/a-log-file-for-each-virtual-host-with-haproxy-and-rsyslog/

> The *format* of the haproxy logfile is fine as it is, except that I
> would like to have more than the 1024 bytes that syslog allows.

Read the haproxy docs on this - you want to tune the "length"
parameter: http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-log

As the docs say: some syslog servers allow messages >1024, some don't.

Use one that does :-)
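For reference, the rsyslog side of the blog post's approach looks roughly like this. This is a hedged sketch, not a tested config: file paths and the matched host string are placeholders, and how message-size limits apply varies with the rsyslog version and transport (UDP vs TCP), so verify against the rsyslog documentation:

```
# /etc/rsyslog.d/haproxy.conf -- rough sketch, adjust names and paths
$MaxMessageSize 8k

# everything haproxy emits, in one file as today
:programname, isequal, "haproxy"   /var/log/haproxy/all.log

# additionally split by virtual host: requires the Host value to appear
# in the log line (e.g. via "capture request header Host len 64")
:msg, contains, "www.example.com"  /var/log/haproxy/www.example.com.log
```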

Cheers,
Jonathan

-- 
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html



Setting up per-domain logging with haproxy

2018-07-17 Thread Shawn Heisey
I have a setup that works like this:

internet->haproxy->apache->tomcat

I have been doing some experiments where the apache server is skipped,
and traffic goes directly from haproxy to tomcat.  These experiments
have gone very well.  Removing Apache from the mix would simplify things
greatly.

I have concluded that the only useful thing we are getting out of Apache
is a logfile per virtualhost.  The Apache logs are nowhere near as
helpful for troubleshooting an individual problem as haproxy's log, but
there are certain kinds of investigations where having a logfile for
each virtualhost *IS* useful, even if it doesn't have all the info that
haproxy logs.

Can haproxy be configured to create multiple logfiles?  Can the filename
of each log be controlled easily in the haproxy config?  Can I use
dynamic info for the logfile name like the value in the Host header?

Currently, haproxy logs to syslog, and aside from log rotation, the
syslog config sends all of haproxy's logs to one logfile.

The *format* of the haproxy logfile is fine as it is, except that I
would like to have more than the 1024 bytes that syslog allows.

Thanks,
Shawn



Re: haproxy 1.8.12 / 1.9- 20180623 / stopping process hangs with threads (100% cpu) on -sf reload / FreeBSD

2018-07-17 Thread PiBa-NL

Hi Christopher,

Op 17-7-2018 om 10:09 schreef Christopher Faulet:

Could you try to revert the following commit please ?

 * ba86c6c25 MINOR: threads: Be sure to remove threads from 
all_threads_mask on exit


Without this specific commit the termination of the old process works
'properly'.
That is, for testing I used the 1.9 snapshot of 20180714 and included a
little patch to remove the 'atomic and', which is basically what that
commit added:

 #ifdef USE_THREAD
-    HA_ATOMIC_AND(&all_threads_mask, ~tid_bit);
 if (tid > 0)
     pthread_exit(NULL);
 #endif

Also snapshot of 20180622 + 
'0461-BUG-MEDIUM-threads-Use-the-sync-point-to-che-1.9-dev0.patch' works 
okay.


Though I guess just reverting that line is not the right fix ;).

Regards,
PiBa-NL (Pieter)



Re: Building HAProxy 1.8 fails on Solaris

2018-07-17 Thread Olivier Houchard
Hi again,

On Tue, Jul 17, 2018 at 01:55:33PM +0200, Olivier Houchard wrote:
> Hi Lukas,
> 
> On Tue, Jul 17, 2018 at 01:08:39PM +0200, Lukas Tribus wrote:
> > On Tue, 17 Jul 2018 at 01:09, Thrawn  
> > wrote:
> > >
> > > Ah, indeed, the GCC version provided on our server is 3.4.3. But the 
> > > readme on https://github.com/haproxy/haproxy says "GCC between 2.95 and 
> > > 4.8". Can the build be changed to continue supporting older GCC, or do 
> > > the docs need an update?
> > 
> > Like I said earlier, "make clean" before compiling with different
> > parameters, like USE_THREAD=
> > 
> > Haproxy 1.8 is supposed to build fine with gcc 3 when disabling
> > threading, but if you just compiled with threads enabled, you do need
> > to "make clean":
> > 
> > 
> > > I think your gcc is just too old. Those appeared around gcc 4.1 or so.
> > 
> > With USE_THREAD= it is supposed to build fine. While threading will
> > not work with gcc 3, we did not drop support for gcc3 altogether.
> > 
> > 
> 
> Unfortunately that is not true: __sync_* was used in include/proto/shctx.h.
> The attached patch uses the haproxy macros instead, and so should get it
> to compile again with older gcc. Thrawn, can you please test it?
> 

After talking with Will a bit, we realized this patch might not work if
using a cache shared across multiple processes.
Can you just add USE_PTHREAD_PSHARED=yes on your command line? It should do
the trick.

Thanks !

Olivier



Re: BUG: Tw is negative with lua sleep

2018-07-17 Thread Patrick Hemmer
Ping?

-Patrick

On 2018/6/22 15:10, Patrick Hemmer wrote:
> When using core.msleep in lua, the %Tw metric is a negative value.
>
> For example with the following config:
> haproxy.cfg:
> global
> lua-load /tmp/haproxy.lua
>
> frontend f1
> mode http
> bind :8000
> default_backend b1
> log 127.0.0.1:1234 daemon
> log-format Ta=%Ta\ Tc=%Tc\ Td=%Td\ Th=%Th\ Ti=%Ti\ Tq=%Tq\
> TR=%TR\ Tr=%Tr\ Tt=%Tt\ Tw=%Tw
>
> backend b1
> mode http
> http-request use-service lua.foo
>
> haproxy.lua:
> core.register_service("foo", "http", function(applet)
> core.msleep(100)
> applet:set_status(200)
> applet:start_response()
> end)
>
> The log contains:
> Ta=104 Tc=0 Td=0 Th=0 Ti=0 Tq=104 TR=104 Tr=104 Tt=104 Tw=-104
>
> ^ TR also looks wrong, as it did not take 104ms to receive the full
> request.
>
> This is built from the commit before current master: d8fd2af
>
> -Patrick



Re: Bug when passing variable to mapping function

2018-07-17 Thread Emeric Brun
Hi Jarno, and thanks Lukas

On 07/16/2018 07:27 AM, Lukas Tribus wrote:
> Hello,
> 
> 
> 
> On Fri, 29 Jun 2018 at 07:15, Jarno Huuskonen  wrote:
>>
>> Hi,
>>
>> On Thu, Jun 28, Jarno Huuskonen wrote:
>>> I think this is the commit that breaks map_regm in this case:
>>> b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
>>> acls/maps thread safe).
>>>
>>> If I revert this commit from pattern.c:pattern_exec_match
>>> then the map_regm \1 backref seems to work.
>>
>> I think I found what's replacing the \000 as first char:
>> in (map.c) sample_conv_map:
>> /* In the regm case, merge the sample with the input. */
>> if ((long)private == PAT_MATCH_REGM) {
>> str = get_trash_chunk();
>> str->len = exp_replace(str->str, str->size, 
>> smp->data.u.str.str,
>>pat->data->u.str.str,
>>(regmatch_t *)smp->ctx.a[0]);
>>
>> Before call to get_trash_chunk() smp->data.u.str.str is for example
>> 'distri.com' and after get_trash_chunk() smp->data.u.str.str
>> is '\000istri.com'.
>>
>> At the moment I don't have time to dig deeper, but hopefully this
>> helps a little bit.
> 
> Thanks for the detailed analysis, relaying to Emeric ...
> 
> 
> 
> Lukas
> 

Could you try the patch in attachment? i hope it will fix the issue

R,
Emeric
From b6d8df3387a7ff9fe781d0315b0ee1de627679cf Mon Sep 17 00:00:00 2001
From: Emeric Brun 
Date: Tue, 17 Jul 2018 09:47:07 -0400
Subject: [PATCH] BUG/MINOR: map: fix map_regm with backref

Due to a cascade of get_trash_chunk() calls, the sample is
corrupted by the time we want to read it.

The fix consists in using a temporary chunk to copy the sample
value and using that copy instead.
---
 src/map.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/src/map.c b/src/map.c
index 0f1b754..3d8ec20 100644
--- a/src/map.c
+++ b/src/map.c
@@ -184,10 +184,27 @@ static int sample_conv_map(const struct arg *arg_p, struct sample *smp, void *pr
 		if (pat->data) {
 			/* In the regm case, merge the sample with the input. */
 			if ((long)private == PAT_MATCH_REGM) {
+				struct chunk *tmptrash;
+
+				/* Copy the content of the sample because it could
+				   be scratched by incoming get_trash_chunk */
+				tmptrash = alloc_trash_chunk();
+				if (!tmptrash)
+					return 0;
+
+				tmptrash->len = smp->data.u.str.len;
+				if (tmptrash->len > (tmptrash->size-1))
+					tmptrash->len = tmptrash->size-1;
+
+				memcpy(tmptrash->str, smp->data.u.str.str, tmptrash->len);
+				tmptrash->str[tmptrash->len] = 0;
+
 				str = get_trash_chunk();
-				str->len = exp_replace(str->str, str->size, smp->data.u.str.str,
+				str->len = exp_replace(str->str, str->size, tmptrash->str,
 				                       pat->data->u.str.str,
 				                       (regmatch_t *)smp->ctx.a[0]);
+
+				free_trash_chunk(tmptrash);
 				if (str->len == -1)
 					return 0;
 				smp->data.u.str = *str;
-- 
2.7.4



Re: Building HAProxy 1.8 fails on Solaris

2018-07-17 Thread Olivier Houchard
Hi Lukas,

On Tue, Jul 17, 2018 at 01:08:39PM +0200, Lukas Tribus wrote:
> On Tue, 17 Jul 2018 at 01:09, Thrawn  wrote:
> >
> > Ah, indeed, the GCC version provided on our server is 3.4.3. But the readme 
> > on https://github.com/haproxy/haproxy says "GCC between 2.95 and 4.8". Can 
> > the build be changed to continue supporting older GCC, or do the docs need 
> > an update?
> 
> Like I said earlier, "make clean" before compiling with different
> parameters, like USE_THREAD=
> 
> Haproxy 1.8 is supposed to build fine with gcc 3 when disabling
> threading, but if you just compiled with threads enabled, you do need
> to "make clean":
> 
> 
> > I think your gcc is just too old. Those appeared around gcc 4.1 or so.
> 
> With USE_THREAD= it is supposed to build fine. While threading will
> not work with gcc 3, we did not drop support for gcc3 altogether.
> 
> 

Unfortunately that is not true: __sync_* was used in include/proto/shctx.h.
The attached patch uses the haproxy macros instead, and so should get it
to compile again with older gcc. Thrawn, can you please test it?

Thanks !

Olivier
From 91546285ceed74516f42a9e98fac14a5094133e0 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Tue, 17 Jul 2018 13:52:19 +0200
Subject: [PATCH] MINOR: shctx: Use HA_ATOMIC_* instead of using the gcc
 builtins.

When implementing atomic operations, use the HA_ATOMIC_* macros provided by
hathreads.h, instead of using the (old) gcc builtins. That way it may use
the newer builtins, or get it to compile with an old gcc that doesn't provide
any atomic builtins.
---
 include/proto/shctx.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/proto/shctx.h b/include/proto/shctx.h
index 55cb2a77..4eed582b 100644
--- a/include/proto/shctx.h
+++ b/include/proto/shctx.h
@@ -132,17 +132,18 @@ static inline unsigned char atomic_dec(unsigned int *ptr)
 #else /* if no x86_64 or i586 arch: use less optimized gcc >= 4.1 built-ins */
 static inline unsigned int xchg(unsigned int *ptr, unsigned int x)
 {
-   return __sync_lock_test_and_set(ptr, x);
+   return HA_ATOMIC_XCHG(ptr, x);
 }
 
 static inline unsigned int cmpxchg(unsigned int *ptr, unsigned int old, 
unsigned int new)
 {
-   return __sync_val_compare_and_swap(ptr, old, new);
+   HA_ATOMIC_CAS(ptr, &old, new);
+   return old;
 }
 
 static inline unsigned char atomic_dec(unsigned int *ptr)
 {
-   return __sync_sub_and_fetch(ptr, 1) ? 1 : 0;
+   return HA_ATOMIC_SUB(ptr, 1) ? 1 : 0;
 }
 
 #endif
-- 
2.14.3



Re: SSL: double free on reload

2018-07-17 Thread Thierry Fournier
On Tue, 17 Jul 2018 10:10:58 +0200
Willy Tarreau  wrote:

> Hi again Nenad,
> 
> On Tue, Jul 17, 2018 at 05:18:45AM +0200, Willy Tarreau wrote:
> > Hi Nenad,
> > 
> > On Tue, Jul 17, 2018 at 03:37:37AM +0200, Nenad Merdanovic wrote:
> > > Ugh, this was a long time ago. [FROM MEMORY] The element should not be
> > > duplicated as far as I can remember. The references are stored in an 
> > > ebtree
> > > in order to prevent duplication and to provide consistent view when 
> > > updated
> > > dynamically.
> > 
> > OK. Then maybe the elements are freed after scanning the ebtree as well,
> > and we're meeting the node again after it was freed. I'll run a test
> > with the memory debugging options to see if it crashes on first dereference.
> 
> OK I found it, the same keys_ref is indeed assigned to multiple bind_confs :
> 
> static int bind_parse_tls_ticket_keys(char **args, int cur_arg, struct proxy 
> *px, struct bind_conf *conf, char **err)
> {
> ...
>   keys_ref = tlskeys_ref_lookup(args[cur_arg + 1]);
>   if(keys_ref) {
>   conf->keys_ref = keys_ref;
>   return 0;
>   }
> 
> So we clearly end up with a normal double-free. I'm checking what's the best
> solution to fix this now, as we don't have flags nor any such thing in the
> bind_conf to indicate that it must or must not be freed. We can't duplicate
> the allocation either because it's used for the updates. I think the cleanest
> solution will be to add a refcount to the tls_keys_ref entry.
> 
> Then I'm proposing the attached patch which works for me and is obvious
> enough to make valgrind happy as well.
> 
> Could you guys please give it a try to confirm ? I'll then merge it.


Hi, the patch works for me with a backport to the 1.8 version. It is in
production.

Thanks,
Thierry



> Thanks,
> Willy


-- 
Thierry Fournier
Web Performance & Security Expert
m: +33 6 68 69 21 85  | e: thierry.fourn...@ozon.io
w: http://www.ozon.io/| b: http://blog.ozon.io/



Re: [PATCH] MEDIUM: proxy_protocol: Send IPv4 addresses when possible

2018-07-17 Thread Lukas Tribus
Hello Tim,


On Fri, 29 Jun 2018 at 21:00, Tim Duesterhus  wrote:
>
> This patch changes the sending side of proxy protocol to convert IP
> addresses to IPv4 when possible (and converts them IPv6 otherwise).
>
> Previously the code failed to properly provide information under
> certain circumstances:
>
> 1. haproxy is being accessed using IPv4, http-request set-src sets
>    an IPv6 address.
> 2. haproxy is being accessed using IPv6, http-request set-src sets
>    an IPv4 address.
> 3. haproxy listens on `::` with v4v6 and is accessed using IPv4:
>    It would send a TCP6 line, instead of a proper TCP4 line, because
>    the IP addresses are represented as IPv4-mapped addresses internally.
>
> Once correctness of this patch has been verified it should be evaluated
> whether it should be backported, as (1) and (2) are bugs. (3) is an
> enhancement.

Thanks for this, just a comment about nr 3:

A backend may rely on v4-mapped addresses for various reasons; consider
a backend that, to simplify its handling of IP addresses, only handles
IPv6 and expects IPv4 addresses to be mapped.
Also consider that to send native v4 addresses, the admin only has to
make a small adjustment to the bind configuration.

So since this would be a breaking change, and the admin can
easily reconfigure the bind line at any time, I would advise against this
and vote for maintaining the current behavior (where the bind
configuration controls this behavior).

I assume the X-Forwarded-For header behaves similarly in this regard.



cheers,
lukas



Re: [PATCH] MEDIUM: proxy_protocol: Send IPv4 addresses when possible

2018-07-17 Thread Tim Düsterhus
Willy,

Am 29.06.2018 um 20:59 schrieb Tim Duesterhus:
> This patch changes the sending side of proxy protocol to convert IP
> addresses to IPv4 when possible (and converts them IPv6 otherwise).
> 
> Previously the code failed to properly provide information under
> certain circumstances:
> 
> 1. haproxy is being accessed using IPv4, http-request set-src sets
>    an IPv6 address.
> 2. haproxy is being accessed using IPv6, http-request set-src sets
>    an IPv4 address.
> 3. haproxy listens on `::` with v4v6 and is accessed using IPv4:
>    It would send a TCP6 line, instead of a proper TCP4 line, because
>    the IP addresses are represented as IPv4-mapped addresses internally.
> 
> Once correctness of this patch has been verified it should be evaluated
> whether it should be backported, as (1) and (2) are bugs. (3) is an
> enhancement.

based on the overall lack of responses I assume that you are busy. I
just want to make sure that this patch / bug report did not slip through
the cracks. A short acknowledgement that you received it would be great,
if you are currently unable to take a deeper look at it.

Best regards
Tim Düsterhus



Re: Building HAProxy 1.8 fails on Solaris

2018-07-17 Thread Lukas Tribus
On Tue, 17 Jul 2018 at 01:09, Thrawn  wrote:
>
> Ah, indeed, the GCC version provided on our server is 3.4.3. But the readme 
> on https://github.com/haproxy/haproxy says "GCC between 2.95 and 4.8". Can 
> the build be changed to continue supporting older GCC, or do the docs need an 
> update?

Like I said earlier, "make clean" before compiling with different
parameters, like USE_THREAD=

Haproxy 1.8 is supposed to build fine with gcc 3 when disabling
threading, but if you just compiled with threads enabled, you do need
to "make clean":


> I think your gcc is just too old. Those appeared around gcc 4.1 or so.

With USE_THREAD= it is supposed to build fine. While threading will
not work with gcc 3, we did not drop support for gcc3 altogether.


cheers,
Lukas



Re: url_param not matching key-only params (also testcases for fetchers)

2018-07-17 Thread Frederic Lecaille

Hello,

On 07/16/2018 08:47 PM, Robin H. Johnson wrote:

I looked in tests & reg-tests, but didn't see any clear way to add tests
for verifying that fetchers work correctly.

I think my co-worker found an edge-case on smp_fetch_url_param/smp_fetch_param.


Have a look at the attached file to test these sample fetchers.

In this reg-test file we use two servers. s1 replies "FAILED" in its 
body, s2 replies "SUCCEEDED".


s2 is used if your ACL matches; if not, s1, the default backend,
is used.


I hope this will help.

Regards,

Fred.
varnishtest "something..."

feature ignore_unknown_macro

server s1 {
rxreq
txresp -body "FAILED"
} -start

server s2 {
rxreq
txresp -body "SUCCEEDED"
} -start

haproxy ha1 -conf {
defaults
mode http
timeout connect 20s
timeout client 20ms
timeout server 1s

backend b1
server s1 ${s1_addr}:${s1_port}

backend b2
server s2 ${s2_addr}:${s2_port}

frontend fe1
default_backend b1
bind "fd@${fe1}"
acl bucket_website_crud urlp(website) -m found
use_backend b2 if bucket_website_crud

} -start

client c1 -connect ${ha1_fe1_sock} {
txreq -url "/something"
rxresp
expect resp.body == "FAILED"
}

client c2 -connect ${ha1_fe1_sock} {
txreq -url "/?website=foo"
rxresp
expect resp.body == "SUCCEEDED"
}

client c1 -start
client c2 -start

client c1 -wait
client c2 -wait


How to redirect with RegEx by using a found pattern in destination

2018-07-17 Thread Jürgen Haas
Hi all,

I tried several things over the last couple of weeks and almost got there,
but now SSL connections are failing. I hope somebody can help me get
the last piece working too.

The goal: redirect requests to files like `/some/path/some-name-EN-UK.pdf` to
`/EN-UK/some-sub-path`, where `EN-UK` is a variable: the substring found
in the request's filename should be used as the prefix in the
destination.

I've solved this with a set of instructions:

```
acl needsredirect path_reg ^/.*[a-z][a-z]-[a-z][a-z].pdf$
acl ischina path_reg ^/.*[en\-ch|zn\-ch].pdf$
reqirep ^([^\ :]*)\ /.*([a-z][a-z]-[a-z][a-z]).pdf \1\ /\2/404 if
needsredirect !ischina
redirect scheme https code 301 if needsredirect !ischina
```

The first 2 lines determine if we have to redirect which is the case if
`needsredirect` is TRUE and `ischina` is FALSE.

The third line does the regex on all headers and the fourth is then
doing the redirect with the modified header values which is doing the
right thing if the original request had been to http://something.

My problem is line 3 in the context of a https request. HaProxy responds
with a `HTTP/1.1 400 Bad Request` and I'm stuck with this approach.

Is there a way to fix this by e.g. only modifying specific header fields
(if so, which and how), or is there an even better way of approaching the task?
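[Editorial note appended to the archive] One hedged, untested sketch of an alternative: instead of rewriting the raw request line with reqirep (which only affects what haproxy forwards, and interacts badly with the separate redirect rule), the redirect target can be computed in a single rule with the regsub converter (available since haproxy 1.6), which behaves the same on http and https frontends. Also note that the brackets in the original `[en\-ch|zn\-ch]` ACL form a character class; a group `(en-ch|zn-ch)` is probably what was intended:

```
acl needsredirect path_reg ^/.*[a-z][a-z]-[a-z][a-z]\.pdf$
acl ischina path_reg (en-ch|zn-ch)\.pdf$
http-request redirect code 301 location https://%[req.hdr(host)]%[path,regsub(^/.*([a-z][a-z]-[a-z][a-z])\.pdf$,/\1/404)] if needsredirect !ischina
```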

I'm very grateful for any help I can get with this problem.

Thanks
Jürgen







Re: SSL: double free on reload

2018-07-17 Thread Willy Tarreau
Hi again Nenad,

On Tue, Jul 17, 2018 at 05:18:45AM +0200, Willy Tarreau wrote:
> Hi Nenad,
> 
> On Tue, Jul 17, 2018 at 03:37:37AM +0200, Nenad Merdanovic wrote:
> > Ugh, this was a long time ago. [FROM MEMORY] The element should not be
> > duplicated as far as I can remember. The references are stored in an ebtree
> > in order to prevent duplication and to provide consistent view when updated
> > dynamically.
> 
> OK. Then maybe the elements are freed after scanning the ebtree as well,
> and we're meeting the node again after it was freed. I'll run a test
> with the memory debugging options to see if it crashes on first dereference.

OK I found it, the same keys_ref is indeed assigned to multiple bind_confs :

static int bind_parse_tls_ticket_keys(char **args, int cur_arg, struct proxy 
*px, struct bind_conf *conf, char **err)
{
...
keys_ref = tlskeys_ref_lookup(args[cur_arg + 1]);
if(keys_ref) {
conf->keys_ref = keys_ref;
return 0;
}

So we clearly end up with a normal double-free. I'm checking what's the best
solution to fix this now, as we don't have flags nor any such thing in the
bind_conf to indicate that it must or must not be freed. We can't duplicate
the allocation either because it's used for the updates. I think the cleanest
solution will be to add a refcount to the tls_keys_ref entry.

Then I'm proposing the attached patch which works for me and is obvious
enough to make valgrind happy as well.

Could you guys please give it a try to confirm ? I'll then merge it.

Thanks,
Willy
From 50588a4e3ccc92cdb0c8a347193192030f0ea9f0 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Tue, 17 Jul 2018 10:05:32 +0200
Subject: BUG/MINOR: ssl: properly ref-count the tls_keys entries

Commit 200b0fa ("MEDIUM: Add support for updating TLS ticket keys via
socket") introduced support for updating TLS ticket keys from the CLI,
but missed a small corner case : if multiple bind lines reference the
same tls_keys file, the same reference is used (as expected), but during
the clean shutdown, it will lead to a double free when destroying the
bind_conf contexts since none of the lines knows if others still use
it. The impact is very low however, mostly a core and/or a message in
the system's log upon old process termination.

Let's introduce some basic refcounting to prevent this from happening,
so that only the last bind_conf frees it.

Thanks to Janusz Dziemidowicz and Thierry Fournier for both reporting
the same issue with an easy reproducer.

This fix needs to be backported from 1.6 to 1.8.
---
 include/types/ssl_sock.h | 1 +
 src/ssl_sock.c           | 6 ++++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/types/ssl_sock.h b/include/types/ssl_sock.h
index 8a5b3ee..2e02631 100644
--- a/include/types/ssl_sock.h
+++ b/include/types/ssl_sock.h
@@ -59,6 +59,7 @@ struct tls_keys_ref {
struct list list; /* Used to chain refs. */
char *filename;
int unique_id; /* Each pattern reference have unique id. */
+   int refcount;  /* number of users of this tls_keys_ref. */
struct tls_sess_key *tlskeys;
int tls_ticket_enc_index;
__decl_hathreads(HA_RWLOCK_T lock); /* lock used to protect the ref */
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index b5547cc..3f70b12 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -4823,7 +4823,7 @@ void ssl_sock_destroy_bind_conf(struct bind_conf *bind_conf)
 	ssl_sock_free_ssl_conf(&bind_conf->ssl_conf);
free(bind_conf->ca_sign_file);
free(bind_conf->ca_sign_pass);
-   if (bind_conf->keys_ref) {
+   if (bind_conf->keys_ref && !--bind_conf->keys_ref->refcount) {
free(bind_conf->keys_ref->filename);
free(bind_conf->keys_ref->tlskeys);
	LIST_DEL(&bind_conf->keys_ref->list);
@@ -7672,7 +7672,8 @@ static int bind_parse_tls_ticket_keys(char **args, int cur_arg, struct proxy *px
}
 
keys_ref = tlskeys_ref_lookup(args[cur_arg + 1]);
-   if(keys_ref) {
+   if (keys_ref) {
+   keys_ref->refcount++;
conf->keys_ref = keys_ref;
return 0;
}
@@ -7719,6 +7720,7 @@ static int bind_parse_tls_ticket_keys(char **args, int cur_arg, struct proxy *px
i -= 2;
keys_ref->tls_ticket_enc_index = i < 0 ? 0 : i % TLS_TICKETS_NO;
keys_ref->unique_id = -1;
+   keys_ref->refcount = 1;
	HA_RWLOCK_INIT(&keys_ref->lock);
conf->keys_ref = keys_ref;
 
-- 
1.7.12.1



Re: haproxy 1.8.12 / 1.9- 20180623 / stopping process hangs with threads (100% cpu) on -sf reload / FreeBSD

2018-07-17 Thread Christopher Faulet

Le 17/07/2018 à 01:08, PiBa-NL a écrit :

Hi List,

With a build of 1.8.12 (and the 1.9 snapshot of 20180623) I'm getting
the 'old' haproxy process taking up 100% CPU usage when using 3 threads in
the config and reloading with the -sf parameter. I'm using FreeBSD. (It
also happens with the 14-7 snapshot.)

It seems to happen after 1 thread quits; one of the others gets out of
control.

Most of the time it happens after the first reload.:
haproxy -f /var/etc/haproxy/haproxy.cfg -D
haproxy -f /var/etc/haproxy/haproxy.cfg -D -sf 19110

The main features I use are: ssl offloading / lua / threads.
There is little to no traffic passing through, though; I'm seeing this
behavior also on my inactive production node. The strange part so far
is that I could not reproduce it yet on my test machine.
If someone has an idea on how to patch it or what direction to search in
for a fix, I'm happy to try.


If there is nothing obvious that can be spotted with the info from the
stack-traces of both 1.8 and 1.9 below, I'll try to dig further tomorrow
:). Thanks in advance for anyone's time :).


Hi Pieter,

This is a problem with the sync point... again. Could you try to revert 
the following commit please ?


 * ba86c6c25 MINOR: threads: Be sure to remove threads from 
all_threads_mask on exit


This fix was probably added too quickly. I suspect a problem when a 
thread exits, removing its tid_bit from all_threads_mask while some 
others are looping in the sync point.


Regards,
--
Christopher Faulet