Re: haproxy segfault

2019-02-13 Thread Georg Faerber
On 19-02-13 16:27:21, Hugues Alary wrote:
> (Also, I've been looking for commit 451c5a88 and can't find it
> anywhere).

See http://git.haproxy.org/?p=haproxy-1.9.git;a=commit;h=451c5a88, also
attached.

Cheers,
Georg
From 451c5a8879a9d59b489ad5117c984044d41c8338 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Sun, 10 Feb 2019 18:49:37 +0100
Subject: [PATCH] BUG/MAJOR: stream: avoid double free on unique_id

Commit 32211a1 ("BUG/MEDIUM: stream: Don't forget to free
s->unique_id in stream_free().") addressed a memory leak but in
exchange may cause double-free due to the fact that after freeing
s->unique_id it doesn't null it and then calls http_end_txn()
which frees it again. Thus the process quickly crashes at runtime.

This fix must be backported to all stable branches where the
aforementioned patch was backported.

(cherry picked from commit 09c4bab41188c13e7a9227f8baaff230ebdd0875)
Signed-off-by: Willy Tarreau 
---
 src/stream.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/stream.c b/src/stream.c
index a96ddcb..df778b1 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -387,6 +387,7 @@ static void stream_free(struct stream *s)
 	}
 
 	pool_free(pool_head_uniqueid, s->unique_id);
+	s->unique_id = NULL;
 
 	hlua_ctx_destroy(s->hlua);
 	s->hlua = NULL;
-- 
1.7.10.4




Re: haproxy segfault

2019-02-13 Thread Lukas Tribus
Hi Hugues,


On Thursday, 14 February 2019, Hugues Alary  wrote:

> Hi,
>
> I am also running into this issue using 1.9.4 (i.e. the current "latest"
> docker image) with absolutely no load at all (1 client):
>
> [ALERT] 044/001000 (1) : sendmsg()/writev() failed in logger #1: No such
> file or directory (errno=2)
> [NOTICE] 044/001000 (1) : New worker #1 (8) forked
> [ALERT] 044/001011 (1) : Current worker #1 (8) exited with code 139
> (Segmentation fault)
> [ALERT] 044/001011 (1) : exit-on-failure: killing every workers with
> SIGTERM
> [WARNING] 044/001011 (1) : All workers exited. Exiting... (139)
>
> I do not have this issue with 1.8.
>
> (Also, I've been looking for commit 451c5a88 and can't find it anywhere).
>

It's in 1.9:

http://git.haproxy.org/?p=haproxy-1.9.git;a=patch;h=451c5a8879a9d59b489ad5117c984044d41c8338

lukas


Re: haproxy segfault

2019-02-13 Thread Hugues Alary
Hi,

I am also running into this issue using 1.9.4 (i.e. the current "latest"
docker image) with absolutely no load at all (1 client):

[ALERT] 044/001000 (1) : sendmsg()/writev() failed in logger #1: No such
file or directory (errno=2)
[NOTICE] 044/001000 (1) : New worker #1 (8) forked
[ALERT] 044/001011 (1) : Current worker #1 (8) exited with code 139
(Segmentation fault)
[ALERT] 044/001011 (1) : exit-on-failure: killing every workers with SIGTERM
[WARNING] 044/001011 (1) : All workers exited. Exiting... (139)

I do not have this issue with 1.8.

(Also, I've been looking for commit 451c5a88 and can't find it anywhere).

Cheers,
-Hugues

On Wed, Feb 13, 2019 at 2:15 AM Christopher Faulet 
wrote:

> On 13/02/2019 at 09:40, m...@mildis.org wrote:
> > Thanks Vincent, got a core dump.
> >
> > Here is the backtrace
> >
> > #0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540
> > ) at include/common/memory.h:199
> > 199    include/common/memory.h: No such file or directory.
> >
> > (gdb) bt full
> > #0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540
> > ) at include/common/memory.h:199
> >  __ret = 0x5577fb583280
> >  item = 0x5577fb583280
> > #1  __pool_get_first (pool=0x5577fb3f7540 ) at
> > include/common/memory.h:216
> >  cmp = 
> > #2  pool_alloc_dirty (pool=0x5577fb3f7540 ) at
> > include/common/memory.h:258
> >  p = 0x5577fb583280
> > #3  pool_alloc (pool=0x5577fb3f7540 ) at
> > include/common/memory.h:272
> > No locals.
> > #4  http_process_request (s=0x5577fb580d50, req=,
> > an_bit=2048) at src/proto_http.c:2880
> >  sess = 0x5577fb57e7a0
> >  txn = 
> >  msg = 
>
> So I confirm, it is a crash during the allocation of s->unique_id. The
> commit 451c5a88 fixes this bug.
>
>
> --
> Christopher Faulet
>
>


Re: High p99 latency with HAProxy 1.9 in http mode compared to 1.8

2019-02-13 Thread Ashwin Neerabail
Hi Willy,

Thank you for the detailed response, and sorry for the delay in replying.

I ran all the combinations multiple times to ensure consistent
reproducibility. Here is what I found:

Test Setup (same as last time):
2 Kube pods one running Haproxy 1.8.17 and another running
1.9.2  loadbalancing across 2 backend pods.
Haproxy container is given 1 CPU , 1 GB Memory.
500 rps per pod test , latencies calculated for 1 min window.

- previous results for comparison
* Haproxy 1.9 - p99 is ~ 20ms , p95 is ~ 11ms , median is 5.5ms
* Haproxy 1.8 - p99 is ~ 8ms , p95 is ~ 6ms, median is 4.5ms
* Haproxy 1.9 : Memory usage - 130MB , CPU : 55% util
* Haproxy 1.8 : Memory usage - 90MB , CPU : 45% util

 - without SSL
HAProxy 1.8 performs slightly better than 1.9
* Haproxy 1.9 - p99 is ~ 9ms , p95 is ~ 3.5ms , median is 2.3ms
* Haproxy 1.8 - p99 is ~ 5ms , p95 is ~ 2.5ms, median is 1.7ms
CPU Usage is identical. (0.1% CPU)

- by disabling server-side idle connections (using "pool-max-conn 0" on
 the server) though "http-reuse never" should be equivalent

This seems to have done the trick: adding `pool-max-conn 0` or `http-reuse
never` fixes the problem, and 1.8 and 1.9 perform similarly (the client app
that calls haproxy uses connection pooling). Unfortunately, we have legacy
clients that close their connection to the frontend for every request.
CPU usage for 1.8 and 1.9 was the same, ~22%.

   - by placing an unconditional redirect rule in your backend so that we
 check how it performs when the connection doesn't leave :
 http-request redirect location /

Tried adding monitor-uri and returning from the remote haproxy rather than
hitting the backend server. Strangely, in this case I see nearly identical
performance and CPU usage with 1.8 and 1.9, even with http-reuse set to
aggressive. CPU usage for 1.8 and 1.9 was the same, ~35%.
Set up is Client > HAProxy > HAProxy (with monitor-uri) > Server.
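For reference, the knobs exercised in these tests map to backend configuration roughly like the sketch below (the backend name and server address are placeholders, not from the thread):

```
backend be_app
    # disable reuse of server-side idle connections; "pool-max-conn 0"
    # on the server line below is the equivalent per-server knob
    http-reuse never
    server srv1 10.0.0.10:8080 pool-max-conn 0

    # alternatively, short-circuit the backend to measure latency
    # without a real server hop:
    # http-request redirect location /
```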

> If you're running harmless tests, you can pick the latest nightly snapshot
> of 2.0-dev which is very close to what 1.9.4 will be.

I also tried the perf tests with 2.0-dev. It shows the same behavior as 1.9.

If you have potential fixes, settings, or other debugging steps to try,
I can test them out and publish the results.
Thanks for your help.

-Ashwin


On Thu, Jan 31, 2019 at 1:43 PM Willy Tarreau  wrote:

> Hi Ashwin,
>
> On Thu, Jan 31, 2019 at 10:32:33AM -0800, Ashwin Neerabail wrote:
> > Hi,
> >
> > We are in process of upgrading to HAProxy 1.9 and we are seeing
> consistent
> > high latency with HAProxy 1.9.2 as compared to 1.8.17 when using HTTP
> Mode
> > ( both with and without TLS). However no latency issues with TCP Mode.
> >
> > Test Setup:
> > 2 Kube pods one running Haproxy 1.8.17 and another running 1.9.2
> > loadbalancing across 2 backend pods.
> > Haproxy container is given 1 CPU , 1 GB Memory.
> > 500 rps per pod test , latencies calculated for 1 min window.
> >
> > Latencies as measured by client:
> >
> > *When running TCP Mode , the p99 latency between 1.9 and 1.8 is the
> same.*
> > *When running HTTP Mode (with TLS),*
> > *Haproxy 1.9 - p99 is ~ 20ms , p95 is ~ 11ms , median is 5.5ms*
> > *Haproxy 1.8 - p99 is ~ 8ms , p95 is ~ 6ms, median is 4.5ms*
>
> The difference is huge, I'm wondering if it could be caused by a last TCP
> segment being sent 40ms too late once in a while. Otherwise I'm having a
> hard time imagining what can take so long a time at 500 Rps!
>
> In case you can vary some test parameters to try to narrow this down, it
> would be interesting to try again :
>- without SSL
>- by disabling server-side idle connections (using "pool-max-conn 0" on
>  the server) though "http-reuse never" should be equivalent
>- by placing an unconditional redirect rule in your backend so that we
>  check how it performs when the connection doesn't leave :
>  http-request redirect location /
>
> > This increased latency is reproducible across multiple runs with 100%
> > consistency.
> > Haproxy reported metrics for connections and requests are the same for
> both
> > 1.8 and 1.9.
> >
> > Haproxy 1.9 : Memory usage - 130MB , CPU : 55% util
> > Haproxy 1.8 : Memory usage - 90MB , CPU : 45% util
>
> That's quite interesting, it could indicate some excessive SSL
> renegotiations. Regarding the extra RAM, I have no idea; it could be
> the result of a leak.
>
> Trying 1.9.3 would obviously help, since it fixes a number of issues, even
> if at first glance I'm not spotting one which could explain this. And I'd
> be interested in another attempt once 1.9.4 is ready since it fixes many
> backend-side connection issues. If you're running harmless tests, you can
> pick the latest nightly snapshot of 2.0-dev which is very close to what
> 1.9.4 will be. But already, testing the points above to bisect the issues
> will help.
>
> > Please let me know if I can provide any more details on this.
>
> In 1.9 we also have the ability to watch more details (per-connection
> CPU timing, 

Re: Compilation fails on OS-X

2019-02-13 Thread Patrick Hemmer


On 2019/2/13 10:29, Olivier Houchard wrote:
> Hi Patrick,
>
> On Wed, Feb 13, 2019 at 10:01:01AM -0500, Patrick Hemmer wrote:
>>
>> On 2019/2/13 09:40, Aleksandar Lazic wrote:
>>> On 13.02.2019 at 14:45, Patrick Hemmer wrote:
 Trying to compile haproxy on my local machine for testing purposes and am
 running into the following:
>>> Which compiler do you use?
>>  # gcc -v
>>  Configured with: 
>> --prefix=/Applications/Xcode.app/Contents/Developer/usr 
>> --with-gxx-include-dir=/usr/include/c++/4.2.1
>>  Apple LLVM version 9.0.0 (clang-900.0.39.2)
>>  Target: x86_64-apple-darwin17.7.0
>>  Thread model: posix
>>  InstalledDir: 
>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>>
 # make TARGET=osx
 src/proto_http.c:293:1: error: argument to 'section' attribute is 
 not
 valid for this target: mach-o section specifier requires a segment and 
 section
 separated by a comma
 DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct 
 http_txn));
 ^
 include/common/memory.h:128:2: note: expanded from macro 
 'DECLARE_POOL'
 REGISTER_POOL(, name, size)
 ^
 include/common/memory.h:123:2: note: expanded from macro 
 'REGISTER_POOL'
 INITCALL3(STG_POOL, create_pool_callback, (ptr), 
 (name),
 (size))
 ^
 include/common/initcall.h:102:2: note: expanded from macro 
 'INITCALL3'
 _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
 arg2,
 arg3)
 ^
 include/common/initcall.h:78:2: note: expanded from macro
 '_DECLARE_INITCALL'
 __DECLARE_INITCALL(__VA_ARGS__)
 ^
 include/common/initcall.h:65:42: note: expanded from macro
 '__DECLARE_INITCALL'

 __attribute__((__used__,__section__("init_"#stg))) =   \



 Issue occurs on master, and the 1.9 branch

 -Patrick
> Does the (totally untested, because I have no Mac to test) patch work for
> you?

Unfortunately not; it just introduces a lot of new errors:


In file included from src/ev_poll.c:22:
In file included from include/common/hathreads.h:26:
include/common/initcall.h:134:22: error: expected ')'
DECLARE_INIT_SECTION(STG_PREPARE);
 ^
include/common/initcall.h:134:1: note: to match this '('
DECLARE_INIT_SECTION(STG_PREPARE);
^
include/common/initcall.h:124:82: note: expanded from macro
'DECLARE_INIT_SECTION'
extern __attribute__((__weak__)) const struct
initcall *__start_init_##stg __asm("section$start$__DATA$" stg); \

   
^

-Patrick


Re: Compilation fails on OS-X

2019-02-13 Thread Tim Düsterhus
Olivier,

On 13.02.19 at 16:29, Olivier Houchard wrote:
> Does the (totally untested, because I have no Mac to test) patch work for
> you?
> 

Note: This was also reported in the bug tracker. Can you add a "see
issue #42" to the message of the final patch?

see: https://github.com/haproxy/haproxy/issues/42

Best regards
Tim Düsterhus



[RFC PATCH] MEDIUM: compression: Add support for brotli compression

2019-02-13 Thread Tim Duesterhus
Willy,
Aleks,
List,

this (absolutely non-ready-to-merge) patch adds support for brotli
compression as suggested in issue #21: 
https://github.com/haproxy/haproxy/issues/21

It is tested on Ubuntu Xenial with libbrotli 1.0.3:

[timwolla@~]apt-cache policy libbrotli-dev
libbrotli-dev:
Installed: 1.0.3-1ubuntu1~16.04.1
Candidate: 1.0.3-1ubuntu1~16.04.1
Version table:
*** 1.0.3-1ubuntu1~16.04.1 500
500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
amd64 Packages
100 /var/lib/dpkg/status
[timwolla@~]apt-cache policy libbrotli1
libbrotli1:
Installed: 1.0.3-1ubuntu1~16.04.1
Candidate: 1.0.3-1ubuntu1~16.04.1
Version table:
*** 1.0.3-1ubuntu1~16.04.1 500
500 http://de.archive.ubuntu.com/ubuntu xenial-updates/main 
amd64 Packages
100 /var/lib/dpkg/status

I can successfully access brotli-compressed URLs with Google Chrome, though
this requires me to disable `gzip` (because haproxy prefers to select gzip,
I suspect because `br` is last in Chrome's `Accept-Encoding` header).

I am also able to successfully download and decompress URLs with `curl`
and the `brotli` CLI utility. The server I use as the backend for these
tests has about 45ms RTT to my machine. The HTML page I use is some random
HTML page on the server, the noise file is 1 MiB of finest /dev/urandom.

You'll notice that the brotli-compressed requests are both faster and
smaller than gzip, with the brotli compression quality hardcoded to 3.
The default is 11, which is *way* slower than gzip.

+ curl localhost:8080/*snip*.html -H 'Accept-Encoding: gzip'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 49280    0 49280    0     0   279k      0 --:--:-- --:--:-- --:--:--  279k
+ curl localhost:8080/*snip*.html -H 'Accept-Encoding: br'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 43401    0 43401    0     0   332k      0 --:--:-- --:--:-- --:--:--  333k
+ curl localhost:8080/*snip*.html -H 'Accept-Encoding: identity'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  127k  100  127k    0     0   441k      0 --:--:-- --:--:-- --:--:--  441k
+ curl localhost:8080/noise -H 'Accept-Encoding: gzip'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1025k    0 1025k    0     0  3330k      0 --:--:-- --:--:-- --:--:-- 3338k
+ curl localhost:8080/noise -H 'Accept-Encoding: br'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024k    0 1024k    0     0  3029k      0 --:--:-- --:--:-- --:--:-- 3030k
+ curl localhost:8080/noise -H 'Accept-Encoding: identity'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024k  100 1024k    0     0  3003k      0 --:--:-- --:--:-- --:--:-- 3002k
+ ls -al
total 3384
drwxrwxr-x  2 timwolla timwolla4096 Feb 13 17:30 .
drwxrwxrwt 28 root root   69632 Feb 13 17:25 ..
-rw-rw-r--  1 timwolla timwolla 598 Feb 13 17:30 download
-rw-rw-r--  1 timwolla timwolla   43401 Feb 13 17:30 html-br
-rw-rw-r--  1 timwolla timwolla   49280 Feb 13 17:30 html-gz
-rw-rw-r--  1 timwolla timwolla  130334 Feb 13 17:30 html-id
-rw-rw-r--  1 timwolla timwolla 1048949 Feb 13 17:30 noise-br
-rw-rw-r--  1 timwolla timwolla 1049666 Feb 13 17:30 noise-gz
-rw-rw-r--  1 timwolla timwolla 1048576 Feb 13 17:30 noise-id
++ zcat html-gz
+ sha256sum html-id /dev/fd/63 /dev/fd/62
++ brotli --decompress --stdout html-br
56f1664241b3dbb750f93b69570be76c6baccb8de4f3a62fb4fec0ce1bf440b5  html-id
56f1664241b3dbb750f93b69570be76c6baccb8de4f3a62fb4fec0ce1bf440b5  /dev/fd/63
56f1664241b3dbb750f93b69570be76c6baccb8de4f3a62fb4fec0ce1bf440b5  /dev/fd/62
++ zcat noise-gz
+ sha256sum noise-id /dev/fd/63 /dev/fd/62
++ brotli --decompress --stdout noise-br
ab23236d9d4acecec239c3f0f9b59e59dd043267eeed9ed723da8b15f46bbf33  noise-id
ab23236d9d4acecec239c3f0f9b59e59dd043267eeed9ed723da8b15f46bbf33  /dev/fd/63

Re: Compilation fails on OS-X

2019-02-13 Thread Olivier Houchard
Hi Patrick,

On Wed, Feb 13, 2019 at 10:01:01AM -0500, Patrick Hemmer wrote:
> 
> 
> On 2019/2/13 09:40, Aleksandar Lazic wrote:
> > On 13.02.2019 at 14:45, Patrick Hemmer wrote:
> >> Trying to compile haproxy on my local machine for testing purposes and am
> >> running into the following:
> > Which compiler do you use?
> 
>   # gcc -v
>   Configured with: 
> --prefix=/Applications/Xcode.app/Contents/Developer/usr 
> --with-gxx-include-dir=/usr/include/c++/4.2.1
>   Apple LLVM version 9.0.0 (clang-900.0.39.2)
>   Target: x86_64-apple-darwin17.7.0
>   Thread model: posix
>   InstalledDir: 
> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
> 
> >> # make TARGET=osx
> >> src/proto_http.c:293:1: error: argument to 'section' attribute is 
> >> not
> >> valid for this target: mach-o section specifier requires a segment and 
> >> section
> >> separated by a comma
> >> DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct 
> >> http_txn));
> >> ^
> >> include/common/memory.h:128:2: note: expanded from macro 
> >> 'DECLARE_POOL'
> >> REGISTER_POOL(, name, size)
> >> ^
> >> include/common/memory.h:123:2: note: expanded from macro 
> >> 'REGISTER_POOL'
> >> INITCALL3(STG_POOL, create_pool_callback, (ptr), 
> >> (name),
> >> (size))
> >> ^
> >> include/common/initcall.h:102:2: note: expanded from macro 
> >> 'INITCALL3'
> >> _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
> >> arg2,
> >> arg3)
> >> ^
> >> include/common/initcall.h:78:2: note: expanded from macro
> >> '_DECLARE_INITCALL'
> >> __DECLARE_INITCALL(__VA_ARGS__)
> >> ^
> >> include/common/initcall.h:65:42: note: expanded from macro
> >> '__DECLARE_INITCALL'
> >>
> >> __attribute__((__used__,__section__("init_"#stg))) =   \
> >>
> >>
> >>
> >> Issue occurs on master, and the 1.9 branch
> >>
> >> -Patrick
> 

Does the (totally untested, because I have no Mac to test) patch work for
you?

Thanks !

Olivier
>From 2f68108ae32a66782b4c7518f8830896732be64d Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Wed, 13 Feb 2019 16:22:17 +0100
Subject: [PATCH] BUILD/MEDIUM: initcall: Fix build on MacOS.

MacOS syntax for sections is a bit different, so implement it.

This should be backported to 1.9.
---
 include/common/initcall.h | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/common/initcall.h b/include/common/initcall.h
index 35d237fd..95e488ba 100644
--- a/include/common/initcall.h
+++ b/include/common/initcall.h
@@ -50,6 +50,12 @@ struct initcall {
void *arg3;
 };
 
+#ifdef __APPLE__
+#define HA_SECTION(s) __section__("__DATA, " s)
+#else
+#define HA_SECTION(s) __section__((s))
+#endif
+
 /* Declare a static variable in the init section dedicated to stage ,
  * with an element referencing function  and arguments .
  *  is needed to deduplicate entries created from a same file. The
@@ -62,7 +68,7 @@ struct initcall {
  */
 #define __DECLARE_INITCALL(stg, linenum, function, a1, a2, a3) \
 static const struct initcall *__initcb_##linenum   \
-   __attribute__((__used__,__section__("init_"#stg))) =   \
+   __attribute__((__used__,HA_SECTION("init_"#stg))) =\
(stg < STG_SIZE) ? &(const struct initcall) {  \
.fct = (void (*)(void *,void *,void *))function,   \
.arg1 = (void *)(a1),  \
@@ -113,9 +119,16 @@ struct initcall {
  * empty. The corresponding sections must contain exclusively pointers to
  * make sure each location may safely be visited by incrementing a pointer.
  */
+#ifdef __APPLE__
+#define DECLARE_INIT_SECTION(stg)                                             \
+	extern __attribute__((__weak__)) const struct initcall *__start_init_##stg __asm("section$start$__DATA$" stg); \
+	extern __attribute__((__weak__)) const struct initcall *__stop_init_##stg __asm("section$end$__DATA$" stg)
+
+#else
 #define DECLARE_INIT_SECTION(stg)                                             \
 	extern __attribute__((__weak__)) const struct initcall *__start_init_##stg; \
 	extern __attribute__((__weak__)) const struct initcall *__stop_init_##stg
+#endif
 
 /* Declare all initcall sections here */
 DECLARE_INIT_SECTION(STG_PREPARE);
-- 
2.20.1



Re: Compilation fails on OS-X

2019-02-13 Thread Patrick Hemmer


On 2019/2/13 09:40, Aleksandar Lazic wrote:
> On 13.02.2019 at 14:45, Patrick Hemmer wrote:
>> Trying to compile haproxy on my local machine for testing purposes and am
>> running into the following:
> Which compiler do you use?

# gcc -v
Configured with: 
--prefix=/Applications/Xcode.app/Contents/Developer/usr 
--with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

>> # make TARGET=osx
>> src/proto_http.c:293:1: error: argument to 'section' attribute is not
>> valid for this target: mach-o section specifier requires a segment and 
>> section
>> separated by a comma
>> DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct 
>> http_txn));
>> ^
>> include/common/memory.h:128:2: note: expanded from macro 
>> 'DECLARE_POOL'
>> REGISTER_POOL(, name, size)
>> ^
>> include/common/memory.h:123:2: note: expanded from macro 
>> 'REGISTER_POOL'
>> INITCALL3(STG_POOL, create_pool_callback, (ptr), 
>> (name),
>> (size))
>> ^
>> include/common/initcall.h:102:2: note: expanded from macro 
>> 'INITCALL3'
>> _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
>> arg2,
>> arg3)
>> ^
>> include/common/initcall.h:78:2: note: expanded from macro
>> '_DECLARE_INITCALL'
>> __DECLARE_INITCALL(__VA_ARGS__)
>> ^
>> include/common/initcall.h:65:42: note: expanded from macro
>> '__DECLARE_INITCALL'
>>
>> __attribute__((__used__,__section__("init_"#stg))) =   \
>>
>>
>>
>> Issue occurs on master, and the 1.9 branch
>>
>> -Patrick



Re: Compilation fails on OS-X

2019-02-13 Thread Aleksandar Lazic
On 13.02.2019 at 14:45, Patrick Hemmer wrote:
> Trying to compile haproxy on my local machine for testing purposes and am
> running into the following:

Which compiler do you use?

>         # make TARGET=osx
>     src/proto_http.c:293:1: error: argument to 'section' attribute is not
> valid for this target: mach-o section specifier requires a segment and section
> separated by a comma
>         DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct http_txn));
>         ^
>         include/common/memory.h:128:2: note: expanded from macro 
> 'DECLARE_POOL'
>                         REGISTER_POOL(, name, size)
>                         ^
>         include/common/memory.h:123:2: note: expanded from macro 
> 'REGISTER_POOL'
>                         INITCALL3(STG_POOL, create_pool_callback, (ptr), 
> (name),
> (size))
>                         ^
>         include/common/initcall.h:102:2: note: expanded from macro 'INITCALL3'
>                         _DECLARE_INITCALL(stage, __LINE__, function, arg1, 
> arg2,
> arg3)
>                         ^
>         include/common/initcall.h:78:2: note: expanded from macro
> '_DECLARE_INITCALL'
>                         __DECLARE_INITCALL(__VA_ARGS__)
>                         ^
>         include/common/initcall.h:65:42: note: expanded from macro
> '__DECLARE_INITCALL'
>                                
> __attribute__((__used__,__section__("init_"#stg))) =   \
> 
> 
> 
> Issue occurs on master, and the 1.9 branch
> 
> -Patrick




Compilation fails on OS-X

2019-02-13 Thread Patrick Hemmer
Trying to compile haproxy on my local machine for testing purposes and
am running into the following:

# make TARGET=osx
src/proto_http.c:293:1: error: argument to 'section' attribute
is not valid for this target: mach-o section specifier requires a
segment and section separated by a comma
DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct
http_txn));
^
include/common/memory.h:128:2: note: expanded from macro
'DECLARE_POOL'
REGISTER_POOL(, name, size)
^
include/common/memory.h:123:2: note: expanded from macro
'REGISTER_POOL'
INITCALL3(STG_POOL, create_pool_callback, (ptr),
(name), (size))
^
include/common/initcall.h:102:2: note: expanded from macro
'INITCALL3'
_DECLARE_INITCALL(stage, __LINE__, function,
arg1, arg2, arg3)
^
include/common/initcall.h:78:2: note: expanded from macro
'_DECLARE_INITCALL'
__DECLARE_INITCALL(__VA_ARGS__)
^
include/common/initcall.h:65:42: note: expanded from macro
'__DECLARE_INITCALL'
   
__attribute__((__used__,__section__("init_"#stg))) =   \



Issue occurs on master, and the 1.9 branch

-Patrick


Re: http-use-htx and IIS

2019-02-13 Thread Christopher Faulet

On 08/02/2019 at 15:55, Willy Tarreau wrote:

Hi Marco,

On Fri, Feb 08, 2019 at 02:20:53PM +0100, Marco Corte wrote:

On 2019-02-07 17:50, Marco Corte wrote:

Hello!

I am testing haproxy version 1.9.4 on Ubuntu 18.04.

With the "option http-use-htx", haproxy shows a strange behaviour when
the real server is IIS and if the users' browsers try to do a POST.



I activated two frontend/backend pair on the same haproxy instance,
forwarding to the same real server 10.64.44.74:82.

bind 10.64.44.112:443 -> no option http-use-htx -> server 10.64.44.74:82
bind 10.64.44.112:444 -> option http-use-htx    -> server 10.64.44.74:82


(..)

Two minutes after the POST, the real server logs a "400" error (because a
timeout is reached, I guess). The fact that the real server is waiting for
some data also matches the haproxy logs, which show an "SD" state at
disconnection.

It is difficult to anonymize the packet content and I do not want to
generate a wall of text here by posting the whole packet capture in clear.
If someone is interested, I can do a tcpdump and send it to him/her.


Could you please give a few extra indications like :
   - the approximate size of the POST request
   - the approximate size of the response (if any)
   - the request headers haproxy sends to IIS
   - the response headers haproxy receives from IIS

You can run haproxy in debug mode (-d) and you'll get all these at once,
it will significantly help figure where to search.



Hi,

Just for the record. I worked on this issue with Marco off-list. And a 
fix was merged and backported to 1.9. For details, see


  git.haproxy.org/?p=haproxy.git;a=commit;h=6cdaf2ad


--
Christopher Faulet



Re: error in haproxy 1.9 using txn.req:send in lua

2019-02-13 Thread Laurent Penot
Hi Christopher,

I'm so sad. It was really working well in my use case with the 1.8
versions. Thanks a lot for your answer.

Best
Laurent



On 13/02/2019 10:56, "Christopher Faulet"  wrote:

On 13/02/2019 at 09:34, Laurent Penot wrote:
> Hi Thierry, guys,
> 
> When receiving a POST request on haproxy, I use lua to compute some 
> values, and modify the body of the request before forwarding to the 
> backend, so my backend can get these variables from the POST and use them.
> 
> Here is a sample cfg, and lua code to reproduce this.
> 
> # Conf (I removed all defaults, timeouts and co.):
> 
> frontend front-nodes
>     bind :80
>     # option to wait for the body before processing (mandatory for POST requests)
>     option http-buffer-request
>     # default backend
>     default_backend be_test
>     http-request lua.manageRequests
> 
> # Lua :
> 
> function manageRequests(txn)
>     -- create new postdata
>     local newPostData = core.concat()
>     newPostData:add('POST /test.php HTTP/1.1\r\n')
>     newPostData:add('Host: test1\r\n')
>     newPostData:add('content-type: application/x-www-form-urlencoded\r\n')
>     local newBodyStr = 'var1=valueA=valueB'
>     local newBodyLen = string.len(newBodyStr)
>     newPostData:add('content-length: ' .. tostring(newBodyLen) .. '\r\n')
>     newPostData:add('\r\n')
>     newPostData:add(newBodyStr)
>     local newPostDataStr = tostring(newPostData:dump())
>     txn.req:send(newPostDataStr)
> end
> 
> core.register_action("manageRequests", { "http-req" }, manageRequests)
> 
> This is working well in haproxy 1.8.x (x: 14 to 18) but I get the
> following error with 1.9.4 (same error with 1.9.2; other 1.9.x versions
> not tested):
> 
> Lua function 'manageRequests': runtime error: 0 from [C] method 'send',
> /etc/haproxy/lua/bench.lua:97 C function line 80.
> 
> Line 97 of my lua file is txn.req:send(newPostDataStr)
> 
> Maybe I'm missing something on 1.9.x but can't find what, or maybe it's
> a bug, I can't say.
> 
Hi Laurent,

It is not supported to modify an HTTP request/response by calling Channel
functions. This means calling the following functions within an HTTP proxy
is forbidden: Channel.get, Channel.dup, Channel.getline, Channel.set,
Channel.append, Channel.send, Channel.forward.

Since HAProxy 1.9, a runtime error is triggered (because there is no way
to detect this during configuration parsing, AFAIK). You may see this as
a regression, but in fact it was never really supported; because of a
missing check, no error was triggered. These functions totally hijack the
HTTP parser, so if they are used the result is undefined, and there are
many ways to crash HAProxy. Unfortunately, for now, there is no way to
rewrite the HTTP messages in Lua.

-- 
Christopher Faulet




Re: haproxy segfault

2019-02-13 Thread Christopher Faulet

On 13/02/2019 at 09:40, m...@mildis.org wrote:

Thanks Vincent, got a core dump.

Here is the backtrace

#0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540 
) at include/common/memory.h:199

199    include/common/memory.h: No such file or directory.

(gdb) bt full
#0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540 
) at include/common/memory.h:199

     __ret = 0x5577fb583280
     item = 0x5577fb583280
#1  __pool_get_first (pool=0x5577fb3f7540 ) at 
include/common/memory.h:216

     cmp = 
#2  pool_alloc_dirty (pool=0x5577fb3f7540 ) at 
include/common/memory.h:258

     p = 0x5577fb583280
#3  pool_alloc (pool=0x5577fb3f7540 ) at 
include/common/memory.h:272

No locals.
#4  http_process_request (s=0x5577fb580d50, req=, 
an_bit=2048) at src/proto_http.c:2880

     sess = 0x5577fb57e7a0
     txn = 
     msg = 


So I confirm, it is a crash during the allocation of s->unique_id. The 
commit 451c5a88 fixes this bug.



--
Christopher Faulet



Re: error in haproxy 1.9 using txn.req:send in lua

2019-02-13 Thread Christopher Faulet

On 13/02/2019 at 09:34, Laurent Penot wrote:

Hi Thierry, guys,

When receiving a POST request on haproxy, I use lua to compute some 
values, and modify the body of the request before forwarding to the 
backend, so my backend can get these variables from the POST and use them.


Here is a sample cfg, and lua code to reproduce this.

# Conf (I removed all defaults, timeouts and co.):

frontend front-nodes
    bind :80
    # option to wait for the body before processing (mandatory for POST requests)
    option http-buffer-request
    # default backend
    default_backend be_test
    http-request lua.manageRequests

# Lua :

function manageRequests(txn)
    -- create new postdata
    local newPostData = core.concat()
    newPostData:add('POST /test.php HTTP/1.1\r\n')
    newPostData:add('Host: test1\r\n')
    newPostData:add('content-type: application/x-www-form-urlencoded\r\n')
    local newBodyStr = 'var1=valueA=valueB'
    local newBodyLen = string.len(newBodyStr)
    newPostData:add('content-length: ' .. tostring(newBodyLen) .. '\r\n')
    newPostData:add('\r\n')
    newPostData:add(newBodyStr)
    local newPostDataStr = tostring(newPostData:dump())
    txn.req:send(newPostDataStr)
end

core.register_action("manageRequests", { "http-req" }, manageRequests)

This is working well in haproxy 1.8.x (x: 14 to 18) but I get the
following error with 1.9.4 (same error with 1.9.2; other 1.9.x versions
not tested):

Lua function 'manageRequests': runtime error: 0 from [C] method 'send',
/etc/haproxy/lua/bench.lua:97 C function line 80.

Line 97 of my lua file is txn.req:send(newPostDataStr)

Maybe I'm missing something on 1.9.x but can't find what, or maybe it's
a bug, I can't say.



Hi Laurent,

Modifying an HTTP request or response by calling Channel functions is not 
supported. This means that calling the following functions from within an 
HTTP proxy is forbidden: Channel.get, Channel.dup, Channel.getline, 
Channel.set, Channel.append, Channel.send, Channel.forward.


Since HAProxy 1.9, a runtime error is triggered (because, AFAIK, there is 
no way to detect this during configuration parsing). You may see this as 
a regression, but in fact it was never really supported; it only appeared 
to work because a missing check meant no error was triggered. These 
functions completely hijack the HTTP parser, so if they are used the 
result is undefined, and there are many ways to crash HAProxy. 
Unfortunately, for now, there is no way to rewrite HTTP messages in Lua.


--
Christopher Faulet



Re: HAProxy in front of Docker Enterprise problem

2019-02-13 Thread Aleksandar Lazic
Hi.

On 13.02.2019 at 00:21, Norman Branitsky wrote:
> I have an HAProxy 1.7 server sitting in front of a number of Docker Enterprise
> Manager nodes and Worker nodes.
> 
> The Worker nodes don’t appear to have any problem with HAProxy terminating the
> SSL and connecting to them via HTTP.
> 
> The Manager nodes are the problem.
> 
> They insist on installing their own certificates (either self-signed or CA 
> signed).
>
> They will only listen to HTTPS traffic.
> 
> So my generic frontend_main-ssl says:
> 
> bind :443  ssl crt /etc/CONFIG/haproxy-1.7/certs/cert.pem
> 
>  
> 
> The backend has the following server statement:
> 
> server xxx 10.240.12.248:443 ssl verify none
> 
>  
> 
> But apparently this doesn’t work – the client gets the SSL certificate 
> provided
> by the HAProxy server
>
> instead of the certificate provided by the Manager node. This causes the 
> Manager
> node to barf.

Have you added the manager certificates to the cert.pem?

> Do I have to make HAProxy listen on 8443 and just do a tcp frontend/backend 
> for
> the Manager nodes?

That's one possibility. It makes the setup easier, and I don't think you
want to intercept anything at the HTTP layer for the Docker registry anyway.

> Norman Branitsky

Regards
aleks




Re: haproxy segfault

2019-02-13 Thread me

Thanks Vincent, got a core dump.

Here is the backtrace

#0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540 
) at include/common/memory.h:199
199    include/common/memory.h: No such file or directory.

(gdb) bt full
#0  0x5577fb060375 in __pool_get_from_cache (pool=0x5577fb3f7540 
) at include/common/memory.h:199
    __ret = 0x5577fb583280
    item = 0x5577fb583280
#1  __pool_get_first (pool=0x5577fb3f7540 ) at 
include/common/memory.h:216
    cmp = 
#2  pool_alloc_dirty (pool=0x5577fb3f7540 ) at 
include/common/memory.h:258
    p = 0x5577fb583280
#3  pool_alloc (pool=0x5577fb3f7540 ) at 
include/common/memory.h:272
No locals.
#4  http_process_request (s=0x5577fb580d50, req=, an_bit=2048) 
at src/proto_http.c:2880
    sess = 0x5577fb57e7a0
    txn = 
    msg = 
#5  0x5577fb08a726 in process_stream (t=, 
context=0x5577fb580d50, state=) at src/stream.c:1984
    max_loops = 199
    ana_list = 2048
    ana_back = 2048
    flags = 
    s = 0x5577fb580d50
    sess = 
    rqf_last = 
    rpf_last = 2147483648
    rq_prod_last = 7
    rq_cons_last = 0
    rp_cons_last = 7
    rp_prod_last = 0
    req_ana_back = 
    req = 0x5577fb580d60
    res = 0x5577fb580dc0
    si_f = 0x5577fb580fe8
    si_b = 0x5577fb581028
#6  0x5577fb1558c4 in process_runnable_tasks () at src/task.c:432
    t = 
    state = 
    ctx = 
    process = 
    t = 
    max_processed = 
#7  0x5577fb0cd551 in run_poll_loop () at src/haproxy.c:2621
    next = 
    exp = 
#8  run_thread_poll_loop (data=) at src/haproxy.c:2686
    ptif = 
    ptdf = 
    start_lock = 0
#9  0x5577fb0265d2 in main (argc=, argv=) at 
src/haproxy.c:3315
    tids = 0x5577fb475c80
    threads = 0x5577fb56e180
    i = 
    old_sig = {__val = {68097, 0, 0, 5, 65, 112, 1, 210453397509, 2, 0, 0, 
0, 0, 0, 0, 0}}
    blocked_sig = {__val = {1844674406710583, 18446744073709551615 
}}
---Type <return> to continue, or q <return> to quit---
    err = 
    retry = 
    limit = {rlim_cur = 4053, rlim_max = 4053}
    errmsg = 
"\000o\323Q\374\177\000\000\270n\323Q\374\177\000\000\n\000\000\000\000\000\000\000d\177\214:\305\177\000\000\020o\323Q\374\177\000\000\340\366@\373wU\000\000\360!B\373wU\000\000\306\306
 
;\305\177\000\000>\001\000\024\000\000\000\000\000m*U?z\020\310\n\000\000\000\000\000\000\000\020+z;\305\177\000\000\000\000\000"
    pidfd = 




On Tuesday, February 12, 2019, 23:20 CET, Vincent Bernat wrote:
 ❦ 12 February 2019 21:44 +01, Mildis:

> I'm struggling with Stretch/systemd to generate the coredump on crash.
> Even running haproxy by hand with ulimit -c unlimited does not
> generate a coredump.

Also install haproxy-dbgsym. You need to comment out the chroot directive in
your HAProxy configuration if it's enabled. Also, you need to set the
core pattern to a fixed directory where the haproxy user can write, like:

sysctl -w kernel.core_pattern=/tmp/core.%p

Then, on next segfault, you should get your coredump.
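Pulling these steps together as a command/configuration sketch (the paths, the PID in the core file name, and the systemd override are examples I am assuming, not from the thread):

```shell
# write core files to a directory the (possibly unprivileged) haproxy
# user can write to -- %p expands to the PID of the crashing process
sysctl -w kernel.core_pattern=/tmp/core.%p

# if haproxy runs under systemd, allow the unit to dump core
# (drop-in override, e.g. via `systemctl edit haproxy`):
#   [Service]
#   LimitCORE=infinity

# after the next segfault, open the dump with the symbols from
# haproxy-dbgsym and print the full backtrace (core.12345 is an example)
gdb /usr/sbin/haproxy /tmp/core.12345 -batch -ex 'bt full'
```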
--
Let me take you a button-hole lower.
-- William Shakespeare, "Love's Labour's Lost"


 


error in haproxy 1.9 using txn.req:send in lua

2019-02-13 Thread Laurent Penot
Hi Thierry, guys,



When receiving a POST request on haproxy, I use lua to compute some values, and 
modify the body of the request before forwarding to the backend, so my backend 
can get these variables from the POST and use them.

Here is a sample cfg, and lua code to reproduce this.

# Conf (I removed all defaults, timeouts and co ..) :

frontend front-nodes

bind :80

# option to wait for the body before processing (mandatory for POST requests)
option http-buffer-request

# default backend
default_backend be_test

http-request lua.manageRequests



# Lua :

function manageRequests(txn)

    -- create new postdata
    local newPostData = core.concat()

    newPostData:add('POST /test.php HTTP/1.1\r\n')
    newPostData:add('Host: test1\r\n')
    newPostData:add('content-type: application/x-www-form-urlencoded\r\n')

    local newBodyStr = 'var1=valueA=valueB'
    local newBodyLen = string.len(newBodyStr)

    newPostData:add('content-length: ' .. tostring(newBodyLen) .. '\r\n')
    newPostData:add('\r\n')
    newPostData:add(newBodyStr)

    local newPostDataStr = tostring(newPostData:dump())

    txn.req:send(newPostDataStr)
end

core.register_action("manageRequests", { "http-req" }, manageRequests)





This is working well in haproxy 1.8.x (x: 14 to 18) but I get the following 
error with 1.9.4 (same error with 1.9.2, other 1.9.x versions not tested):

Lua function 'manageRequests': runtime error: 0 from [C] method 'send', 
/etc/haproxy/lua/bench.lua:97 C function line 80.

Line 97 of my lua file is txn.req:send(newPostDataStr)



Maybe I'm missing something on 1.9.x but can't find what, or maybe it's a bug, 
I can't say.

Hope you can help.

Let me know if you need other informations.



Please find below the result of haproxy -vv for both versions used.







Infos on haproxy installation 1.8.14 :

HA-Proxy version 1.8.14-52e4d43 2018/09/20

Copyright 2000-2018 Willy Tarreau 



Build options :

  TARGET  = linux2628

  CPU = generic

  CC  = gcc

  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-unused-label

  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=yes 
USE_DEVICEATLAS=1 USE_SYSTEMD=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1



Default settings :

  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200



Built with OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018

Running on OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018

OpenSSL library supports TLS extensions : yes

OpenSSL library supports SNI : yes

OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Built with Lua version : Lua 5.3.4

Built with DeviceAtlas support.

Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Encrypted password support via crypt(3): yes

Built with multi-threading support.

Built with PCRE version : 8.42 2018-03-20

Running on PCRE version : 8.42 2018-03-20

PCRE library supports JIT : yes

Built with zlib version : 1.2.7

Running on zlib version : 1.2.7

Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")

Built with network namespace support.



Available polling systems :

  epoll : pref=300,  test result OK

   poll : pref=200,  test result OK

 select : pref=150,  test result OK

Total: 3 (3 usable), will use epoll.



Available filters :

[SPOE] spoe

[COMP] compression

[TRACE] trace





Infos on haproxy installation 1.9.4 :

HA-Proxy version 1.9.4 2019/02/06 - https://haproxy.org/

Build options :

  TARGET  = linux2628

  CPU = generic

  CC  = gcc

  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits

  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=yes 
USE_DEVICEATLAS=1 USE_SYSTEMD=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1



Default settings :

  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200



Built with OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018

Running on OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018

OpenSSL library supports TLS extensions : yes

OpenSSL library supports SNI : yes

OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Built with Lua version : Lua 5.3.4

Built with DeviceAtlas support.

Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Built with zlib version : 1.2.7

Running on zlib version : 1.2.7

Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")

Built with PCRE version : 8.42 2018-03-20

Running on PCRE version : 8.42 2018-03-20

PCRE library supports JIT : yes

Encrypted password support via crypt(3): yes

Built with