Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-07 Thread Willy Tarreau
Hi Pieter,

On Mon, Jan 07, 2019 at 09:24:24PM +0100, PiBa-NL wrote:
> For one part it's always been broken (needing the short mailer timeout to send
> all expected mails); for the other part, at least until 1.8.14 it used to
> NOT send thousands of mails, so that would be a regression in the current 1.9
> version that should get fixed on a shorter term.

OK, thanks for clarifying. Indeed a fix is needed then.

> > I'm hesitant between
> > merging this in the slow category or the broken one. My goal with "broken"
> > was to keep the scripts that trigger broken behaviours that need to be
> > addressed, rather than keep broken scripts.
> Indeed, keeping broken scripts wouldn't be helpful in the long run, unless
> there is still the intent to fix them. That isn't what the makefile says about
> 'LEVEL 5', though. It says it's for 'broken scripts' and to quickly disable
> them, not, as you write here, for scripts that show 'broken haproxy behavior'.

I know :-)  For me the "broken" scripts should be the experimental ones;
those for known broken haproxy behavior are definitely worth keeping,
because once the issue is fixed they can be renamed and included in the
standard test series. It's just that until a bug is fixed, being reminded
about it in loops is counter-productive, especially when these ones cause
time-outs or whatever issues some of our bugs used to cause!

> >   My goal is to make sure we
> > never consider it normal to have failures in the regular test suite,
> > otherwise you know how it becomes, just like compiler warnings, people
> > say "oh I didn't notice this new error in the middle of all other ones".
> Agreed, though I will likely fall into repeating this some day, apologies in
> advance ;)..

No, you're welcome!

> I guess we could 'fix' the regtest by specifying 'timeout mail
> 200'; that would fix it for 1.7 and 1.8, and might help for 1.9
> regression tests and to get it fixed to at least not send thousands of mails.
> We might forget about the short time requirement then, though, which seems
> strange as well. And the test wouldn't be 1.6 compatible, as 1.6 doesn't have
> that setting at all.

I honestly have no idea. My goal is that this bug gets fixed at least if
it's a regression.

> > Thus probably the best thing to do is to use it at level 5 so that it's
> > easier to work on the bug without triggering false positives when doing
> > regression testing.
> > 
> > What's your opinion ?
> 
> With a changed description for 'level 5' being 'shows broken haproxy
> behavior, to be fixed in a future release', I think it would fit in there
> nicely. Can you change the starting letter of the .vtc test (and the .lua
> and the reference to it) to 'k' while committing? Or shall I re-send it?

I can do it, don't worry.

> p.s. What do you think about the 'naming' of the test?
> 'k_healthcheckmail.vtc' or 'k0.vtc'? Personally I don't think the
> 'numbering' of tests makes them easier to use.

I've been thinking about such things as well. As I mentioned earlier, I
definitely think that our reg-testing suite's organisation will continue
to evolve. It's something new for this project and we're discovering a
number of unexpected side effects that progressively make us adapt, and
as long as we can continue to issue "make regtest", I think everyone will
be OK.

Cheers,
Willy



Re: compression in defaults happens twice with 1.9.0

2019-01-07 Thread PiBa-NL

Hi Christopher,

On 7-1-2019 at 16:32, Christopher Faulet wrote:

On 06/01/2019 at 16:22, PiBa-NL wrote:

Hi List,

Using both 1.9.0 and 2.0-dev0-909b9d8, compression happens twice when
configured in defaults.
This was noticed by user walle303 on IRC.

Seems like a bug to me, as 1.8.14 does not show this behavior. Attached is a
little regtest that reproduces the issue.

Can someone take a look, thanks in advance.



Hi Pieter,

Here is the patch that should fix this issue. Could you confirm please ?

Thanks

Works for me. Thanks!

Regards,
PiBa-NL (Pieter)



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-07 Thread PiBa-NL

Hi Willy,
On 7-1-2019 at 15:25, Willy Tarreau wrote:

Hi Pieter,

On Sun, Jan 06, 2019 at 04:38:21PM +0100, PiBa-NL wrote:

The 23654 mails received for a failed server is a bit much..

I agree. I really don't know much about how the mails work, to be honest, as
I have never used them. I remember that we reused a part of the tcp-check
infrastructure because by then it offered a convenient way to proceed with
send/expect sequences. Maybe there's something excessive in the sequence
there, such as a certain status code being expected at the end while the
mail succeeds, I don't know.

Given that this apparently has always been broken,
For one part it's always been broken (needing the short mailer timeout to
send all expected mails); for the other part, at least until 1.8.14 it
used to NOT send thousands of mails, so that would be a regression in the
current 1.9 version that should get fixed on a shorter term.

I'm hesitant between
merging this in the slow category or the broken one. My goal with "broken"
was to keep the scripts that trigger broken behaviours that need to be
addressed, rather than keep broken scripts.
Indeed, keeping broken scripts wouldn't be helpful in the long run,
unless there is still the intent to fix them. That isn't what the makefile
says about 'LEVEL 5', though. It says it's for 'broken scripts' and to
quickly disable them, not, as you write here, for scripts that show
'broken haproxy behavior'.

  My goal is to make sure we
never consider it normal to have failures in the regular test suite,
otherwise you know how it becomes, just like compiler warnings, people
say "oh I didn't notice this new error in the middle of all other ones".
Agreed, though I will likely fall into repeating this some day, apologies
in advance ;).. I guess we could 'fix' the regtest by specifying
'timeout mail 200'; that would fix it for 1.7 and 1.8, and might help
for 1.9 regression tests and to get it fixed to at least not send
thousands of mails. We might forget about the short time requirement
then, though, which seems strange as well. And the test wouldn't be 1.6
compatible, as 1.6 doesn't have that setting at all.
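For readers following along, a minimal sketch of what the 'timeout mail 200' workaround discussed above could look like (the section, backend, server names, and addresses are made up for illustration; 'timeout mail' only exists from haproxy 1.7 onward, hence the 1.6 incompatibility):

```
# hypothetical config sketch, not the actual regtest
mailers mymailers
    mailer smtp1 192.0.2.10:25   # placeholder SMTP server
    timeout mail 200             # the short (200ms) timeout being discussed

backend be_app
    email-alert mailers mymailers
    email-alert from haproxy@example.com
    email-alert to admin@example.com
    server srv1 192.0.2.20:80 check
```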

Thus probably the best thing to do is to use it at level 5 so that it's
easier to work on the bug without triggering false positives when doing
regression testing.

What's your opinion ?


With a changed description for 'level 5' being 'shows broken haproxy
behavior, to be fixed in a future release', I think it would fit in there
nicely. Can you change the starting letter of the .vtc test (and the
.lua and the reference to it) to 'k' while committing? Or shall I re-send it?


p.s. What do you think about the 'naming' of the test?
'k_healthcheckmail.vtc' or 'k0.vtc'? Personally I don't think the
'numbering' of tests makes them easier to use.



thanks,
Willy


Regards,

PiBa-NL (Pieter)




RE: Send-proxy not modifying some traffic with proxy ip/port details instead retaining same client ip port

2019-01-07 Thread Mohandass, Roobesh
Hi Lukas,

We tried the provided patch, and it didn't help.

-Roobesh G M

-Original Message-
From: Mohandass, Roobesh 
Sent: Wednesday, December 26, 2018 6:45 PM
To: lu...@ltri.eu
Cc: haproxy 
Subject: RE: Send-proxy not modifying some traffic with proxy ip/port details 
instead retaining same client ip port

Hello Lukas,

Sure, we will try the attached patch and share the feedback with you/the team.

Thanks for the help.

Kind regards,
Roobesh G M

-Original Message-
From: lu...@ltri.eu 
Sent: Wednesday, December 26, 2018 6:30 PM
To: Mohandass, Roobesh 
Cc: haproxy 
Subject: Re: Send-proxy not modifying some traffic with proxy ip/port details 
instead retaining same client ip port

Hello Roobesh,


On Wed, 26 Dec 2018 at 11:49, Mohandass, Roobesh  
wrote:
> RGM: This is reproducible anywhere (production/lab), but when we see this
> behavior is the question. As I said, out of a very large number of
> requests, we observe this behavior only for some (but it can be caught very
> quickly, in less than a 30-minute span of time).

Thanks for clarifying.

I don't have the time to reproduce this today, but could you try the attached
patch, please?

It changes the logic around those calls: it uses getsockopt/SO_ORIGINAL_DST
first, checks the return value, and only then calls getsockname(). Previously
we did not check the return value of getsockopt/SO_ORIGINAL_DST. My hope is
that that call returns an error when it would return wrong data; if that's
the case, the patch should fix the issue without sacrificing other haproxy
features.

If that doesn't fix the problem we probably have to talk to the 
kernel/netfilter folks.


Thanks,
Lukas


1.8-SO_ORIGINAL_DST-change.diff
Description: 1.8-SO_ORIGINAL_DST-change.diff


Re: [PATCH] ssl certificates load speedup and dedup (pem/ctx)

2019-01-07 Thread Emeric Brun
Hi Manu,

On 1/7/19 5:59 PM, Emmanuel Hocdet wrote:
> It's better with patches…
> 
>> On 7 Jan 2019 at 17:57, Emmanuel Hocdet wrote:
>>
>> Hi,
>>
>> Following the first patch series (included).
>> The goal is to deduplicate common certificates in memory and in shared pem 
>> files.
>>
>> PATCH 7/8 is only for boringssl (directive to dedup certificate in memory 
>> for ctx)
>> Last patch should be the more interesting:
>> [PATCH 8/8] MINOR: ssl: add "issuer-path" directive.
>>
>> Certificates loaded with "crt" and "crt-list" commonly share the same
>> intermediate certificate in their PEM file. "issuer-path" is a global
>> directive to share intermediate certificates from a directory. If the
>> certificate chain is not included in the certificate PEM file, haproxy
>> will complete the chain if the issuer matches the first certificate of the
>> chain stored via the "issuer-path" directive. Such chains will be shared
>> in ssl shared memory.
>> . The "issuer-path" directive can be set several times.
>> . Only the sha1 key identifier is supported (rfc5280 4.2.1.2. (1))
>>
>> If you want to test it, the patch series can be applied to haproxy-dev or
>> haproxy-1.9.
>>
>> Feedbacks are welcome :)
>>
>> ++
>> Manu
>>
> 
> 

We have to double-check this patch series proposal, because we have a pending
feature on the roadmap which could heavily collide with it: loading a
certificate only once per fs entry.

For us it is a mandatory feature to allow a clean "hot" update of certificates
(the key to identify a certificate to update will be the path on the fs, or at
least the base path).

Emeric



Re: [PATCH] ssl certificates load speedup and dedup (pem/ctx)

2019-01-07 Thread Emmanuel Hocdet
It's better with patches…

> On 7 Jan 2019 at 17:57, Emmanuel Hocdet wrote:
>
> Hi,
>
> Following the first patch series (included).
> The goal is to deduplicate common certificates in memory and in shared pem
> files.
>
> PATCH 7/8 is only for boringssl (directive to dedup certificate in memory
> for ctx)
> Last patch should be the more interesting:
> [PATCH 8/8] MINOR: ssl: add "issuer-path" directive.
>
> Certificates loaded with "crt" and "crt-list" commonly share the same
> intermediate certificate in their PEM file. "issuer-path" is a global
> directive to share intermediate certificates from a directory. If the
> certificate chain is not included in the certificate PEM file, haproxy
> will complete the chain if the issuer matches the first certificate of the
> chain stored via the "issuer-path" directive. Such chains will be shared
> in ssl shared memory.
> . The "issuer-path" directive can be set several times.
> . Only the sha1 key identifier is supported (rfc5280 4.2.1.2. (1))
>
> If you want to test it, the patch series can be applied to haproxy-dev or
> haproxy-1.9.
>
> Feedbacks are welcome :)
>
> ++
> Manu

0001-REORG-ssl-promote-cert_key_and_chain-handling.patch
Description: Binary data


0002-MINOR-ssl-use-STACK_OF-for-chain-certs.patch
Description: Binary data


0003-MINOR-ssl-SSL_CTX_set1_chain-compatibility.patch
Description: Binary data


0004-MINOR-ssl-used-cert_key_and_chain-func-in-load_cert_.patch
Description: Binary data


0005-BUG-MINOR-ssl-fix-ssl_sock_load_multi_cert-init-vars.patch
Description: Binary data


0006-CLEANUP-ssl-ssl_sock_load_crt_file_into_ckch.patch
Description: Binary data


0007-MINOR-ssl-dedup-cert-chain-for-bind-ctx-boringssl.patch
Description: Binary data


0008-MINOR-ssl-add-issuer-path-directive.patch
Description: Binary data


[PATCH] ssl certificates load speedup and dedup (pem/ctx)

2019-01-07 Thread Emmanuel Hocdet
Hi,

Following the first patch series (included).
The goal is to deduplicate common certificates in memory and in shared pem 
files.

PATCH 7/8 is only for boringssl (directive to dedup certificate in memory for 
ctx)
Last patch should be the more interesting:
[PATCH 8/8] MINOR: ssl: add "issuer-path" directive.

Certificates loaded with "crt" and "crt-list" commonly share the same
intermediate certificate in their PEM file. "issuer-path" is a global
directive to share intermediate certificates from a directory. If the
certificate chain is not included in the certificate PEM file, haproxy
will complete the chain if the issuer matches the first certificate of the
chain stored via the "issuer-path" directive. Such chains will be shared
in ssl shared memory.
. The "issuer-path" directive can be set several times.
. Only the sha1 key identifier is supported (rfc5280 4.2.1.2. (1))
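
As a sketch of how the proposed directive could be used (this reflects a patch under review, not a released feature, and all paths here are made up):

```
# hypothetical usage of the proposed "issuer-path" directive
global
    issuer-path /etc/haproxy/issuers
    issuer-path /etc/haproxy/more-issuers   # may be set several times

frontend fe_tls
    # server.pem holds only the leaf certificate and key; haproxy would
    # complete the chain from the issuer-path directory on issuer match
    bind :443 ssl crt /etc/haproxy/certs/server.pem
```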

If you want to test it, the patch series can be applied to haproxy-dev or
haproxy-1.9.

Feedbacks are welcome :)

++
Manu

> On 12 Dec 2018 at 12:23, Emmanuel Hocdet wrote:
> 
> 
> Hi,
> 
> I tried to improve the haproxy loading time with a lot of certificates, and
> saw a double file open for each certificate (one for the private key and one
> for the cert/chain).
> The multi-cert loading part does not have this issue and is a good candidate
> for sharing code: these patches are this work, with factoring/cleanup/fixes.
> 
> About speed: a PEM file with the private key in first position is far better.
> 
> Could you consider these patches?
> 



Re: compression in defaults happens twice with 1.9.0

2019-01-07 Thread Christopher Faulet

On 06/01/2019 at 16:22, PiBa-NL wrote:

Hi List,

Using both 1.9.0 and 2.0-dev0-909b9d8, compression happens twice when
configured in defaults.
This was noticed by user walle303 on IRC.

Seems like a bug to me, as 1.8.14 does not show this behavior. Attached is a
little regtest that reproduces the issue.

Can someone take a look, thanks in advance.



Hi Pieter,

Here is the patch that should fix this issue. Could you confirm please ?

Thanks
--
Christopher Faulet
>From ff3c04e40ab0f4a7176ef25835b40f0d068150cf Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 7 Jan 2019 14:41:59 +0100
Subject: [PATCH] BUG/MINOR: compression: Disable it if another one is already
 in progress

Since commit 9666720c8 ("BUG/MEDIUM: compression: Use the right buffer
pointers to compress input data"), the compression can be done twice: the
first time on the frontend and the second time on the backend. This may
happen when the compression is configured in a defaults section.

To fix the bug, when the response is checked to know whether it should be
compressed or not, the compression is not performed if the flag
HTTP_MSGF_COMPRESSING is already set: it means the compression is already
handled by a previous compression filter.

Thanks to Pieter (PiBa-NL) for reporting this bug.

This patch must be backported to 1.9.
---
 src/flt_http_comp.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/flt_http_comp.c b/src/flt_http_comp.c
index 380366b06..f8fbf4c9d 100644
--- a/src/flt_http_comp.c
+++ b/src/flt_http_comp.c
@@ -808,6 +808,10 @@ http_select_comp_reshdr(struct comp_state *st, struct stream *s, struct http_msg
 	if (st->comp_algo == NULL)
 		goto fail;
 
+	/* compression already in progress */
+	if (msg->flags & HTTP_MSGF_COMPRESSING)
+		goto fail;
+
 	/* HTTP < 1.1 should not be compressed */
 	if (!(msg->flags & HTTP_MSGF_VER_11) || !(txn->req.flags & HTTP_MSGF_VER_11))
 		goto fail;
@@ -902,6 +906,10 @@ htx_select_comp_reshdr(struct comp_state *st, struct stream *s, struct http_msg
 	if (st->comp_algo == NULL)
 		goto fail;
 
+	/* compression already in progress */
+	if (msg->flags & HTTP_MSGF_COMPRESSING)
+		goto fail;
+
 	/* HTTP < 1.1 should not be compressed */
 	if (!(msg->flags & HTTP_MSGF_VER_11) || !(txn->req.flags & HTTP_MSGF_VER_11))
 		goto fail;
@@ -1391,7 +1399,6 @@ check_implicit_http_comp_flt(struct proxy *proxy)
 	fconf->conf = NULL;
 	fconf->ops  = &comp_ops;
 	LIST_ADDQ(&proxy->filter_configs, &fconf->list);
-
  end:
 	return err;
 }
-- 
2.20.1



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-07 Thread Willy Tarreau
Hi Pieter,

On Sun, Jan 06, 2019 at 04:38:21PM +0100, PiBa-NL wrote:
> Hi,
> 2 weeks have passed without a reply, so hereby a little 'bump'.. I know
> everyone has been busy, but it would be nice to get the test added, or at
> least the biggest issue, the 'mailbomb', fixed before the next release. If
> it's 'scheduled' to get looked at later, that's okay. Just making sure it
> isn't forgotten about :).

Sure, and sorry for the silence, but as you guessed, I also think
everyone got busy.

> The 23654 mails received for a failed server is a bit much..

I agree. I really don't know much about how the mails work, to be honest, as
I have never used them. I remember that we reused a part of the tcp-check
infrastructure because by then it offered a convenient way to proceed with
send/expect sequences. Maybe there's something excessive in the sequence
there, such as a certain status code being expected at the end while the
mail succeeds, I don't know.

Given that this apparently has always been broken, I'm hesitant between
merging this in the slow category or the broken one. My goal with "broken"
was to keep the scripts that trigger broken behaviours that need to be
addressed, rather than keep broken scripts. My goal is to make sure we
never consider it normal to have failures in the regular test suite,
otherwise you know how it becomes, just like compiler warnings, people
say "oh I didn't notice this new error in the middle of all other ones".

Thus probably the best thing to do is to use it at level 5 so that it's
easier to work on the bug without triggering false positives when doing
regression testing.

What's your opinion ?

thanks,
Willy



Re: Replicated stick tables have absurd values for conn_cur

2019-01-07 Thread Emerson Gomes
Hello Tim,

Just to update you: I have tried the patch, and while I didn't see any new
occurrences of the underflow, HAProxy started to crash constantly...

Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
Current worker #1 (14366) exited with code 139 (Segmentation fault)
Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
exit-on-failure: killing every workers with SIGTERM
Jan 07 10:32:37 afrodite haproxy[14364]: [WARNING] 006/103237 (14364) : All
workers exited. Exiting... (139)

I am not sure if the segfaults are related to the patch; continuing the
investigation...

BR.,
Emerson


On Thu, 3 Jan 2019 at 21:48, Emerson Gomes wrote:

> Hello Tim,
>
> Thanks a lot for the patch. I will try it out and let you know the results.
>
> BR.,
> Emerson
>
> On Thu, 3 Jan 2019 at 21:18, Tim Düsterhus wrote:
>
>> Emerson,
>>
>> On 03.01.19 at 21:58, Emerson Gomes wrote:
>> > However, the underflow scenario only seems to be possible if the peers are
>> > sending relative values, rather than absolute ones.
>>
>> I don't believe so. My hypothetical timeline was created with absolute
>> values in mind.
>>
>> > Apparently both cases (absolute and offset values) exist.
>> > I am looking at src/peers.c to understand how the peer protocol works and
>> > maybe create the patch you proposed (do not decrement the counter if it is
>> > already 0).
>>
>> I attached a patch which I believe fixes the issue (checking for 0 when
>> decrementing, not touching the peers).
>>
>> > However, it seems that a real fix would require some big changes to the
>> > protocol itself.
>>
>> Yes I agree.
>>
>> > One potential implementation I could imagine would be that, rather than
>> > broadcasting absolute values or offsets, each neighbor peer reports only
>> > the number of connections it has locally, and it would be up to the
>> > local node to resolve the actual value by adding up the different values
>> > received from all neighbors.
>>
>> Yes, that probably would be the most reliable implementation. It takes
>> up more memory and processing power, though.
>>
>> > Not even sure if my understanding is correct, but it's a task currently
>> > out of my reach.
>> > Should I file a bug report somewhere? :)
>> >
>>
>> I suspect that the developers will notice this thread. A proper issue
>> tracker is a wish of mine as well
>> (https://www.mail-archive.com/haproxy@formilux.org/msg32239.html).
>>
>> Best regards
>> Tim Düsterhus
>>
>


Re: [RFC PATCH] couple of reg-tests

2019-01-07 Thread Frederic Lecaille

On 1/2/19 2:17 PM, Jarno Huuskonen wrote:

Hello,


Hello Jarno,

Sorry for this late reply.


I started playing with reg-tests and came up with a couple of regtests.
Is there a better subdirectory for these than http-rules? Maybe
map/b0.vtc and converter/h*?


No, at this time it is ok.



I'm attaching the tests for comments.



reg-tests/http-rules/h3.vtc fails on my side due to a typo in the
regex, with this error:


 h10.0 CLI regexp error: 'missing opening brace after \o' (@48) 
(^0x[a-f0-9]+ example\.org https://www\.example.\org\n0x[a-f0-9]+ 
subdomain\.example\.org https://www\.subdomain\.example\.org\n$)


.\org should be replaced by \.org

Could you check on your side why you did not notice this issue, please?

After checking this issue we will merge your patches. Great work!


Thank you a lot.

Fred.



Re: [THANKS] Re: Deadlock lua when calling register_task inside action function

2019-01-07 Thread Willy Tarreau
On Mon, Jan 07, 2019 at 10:46:08AM +0100, Thierry Fournier wrote:
> great ! 
> 
> Willy, could you apply the attached patch ?

applied, thanks!
Willy



Re: [PATCH] Re: Question re lua and tcp content

2019-01-07 Thread Willy Tarreau
On Sun, Jan 06, 2019 at 07:46:15PM +0100, Thierry Fournier wrote:
> Hi,
> 
> Thanks for the bug report. It is fixed by the attached patch.
> 
> Willy, could you merge this patch ?

Sure, now applied, thanks!

Willy



[THANKS] Re: Deadlock lua when calling register_task inside action function

2019-01-07 Thread Thierry Fournier
great ! 

Willy, could you apply the attached patch ?

thanks
Thierry


0001-BUG-MEDIUM-dead-lock-when-Lua-tasks-are-trigerred.patch
Description: Binary data

> On 6 Jan 2019, at 19:11, Thierry Fournier  
> wrote:
> 
> Hi,
> 
> Thanks for the bug report.
> Your "uneducated guess" was right.
> 
> Could you test the patch in attachment ?
> 
> Thanks
> Thierry
> <0001-BUG-MEDIUM-dead-lock-when-Lua-tasks-are-trigerred.patch>
> 
>> On 2 Jan 2019, at 01:40, Flakebi  wrote:
>> 
>> Hi,
>> 
>> I am currently trying to send an http request to a logging server for some
>> incoming requests.
>> The incoming connection should be handled independently of the logging, so
>> it should be possible that the incoming connection is already closed while
>> the logging is still in progress.
>> 
>> I use lua to create a new http action. This action then registers a task to 
>> decouple the logging.
>> And the rest happens inside this task.
>> 
>> As far as I understand the lua api documentation, it should be possible to 
>> call `register_task` [1]
>> inside an action. However, HAProxy deadlocks when a request comes in.
>> My uneducated guess would be that HAProxy hangs in a spinlock because it 
>> tries to create a lua context
>> while still executing lua.
>> 
>> I tested version 1.8.14 and 1.9.0 and they show the same behaviour.
>> The lua version is 5.3.5.
>> 
>> Am I missing something? Should this be possible at all?
>> 
>> gdb tells the following stacktrace. HAProxy never leaves the hlua_ctx_init 
>> function:
>> 
>> #0  0x08eb8d04fd0f in hlua_ctx_init ()
>> #1  0x08eb8d0533f0 in ?? ()
>> #2  0x6de619c40d27 in ?? () from /usr/lib/liblua.so.5.3
>> #3  0x6de619c4db85 in ?? () from /usr/lib/liblua.so.5.3
>> #4  0x6de619c403b3 in ?? () from /usr/lib/liblua.so.5.3
>> #5  0x6de619c410a9 in lua_resume () from /usr/lib/liblua.so.5.3
>> #6  0x08eb8d04cf9a in ?? ()
>> #7  0x08eb8d05197e in ?? ()
>> #8  0x08eb8d05d037 in http_req_get_intercept_rule ()
>> #9  0x08eb8d063f96 in http_process_req_common ()
>> #10 0x08eb8d08f0f1 in process_stream ()
>> #11 0x08eb8d156b08 in process_runnable_tasks ()
>> #12 0x08eb8d0d443b in ?? ()
>> #13 0x08eb8d02b660 in main ()
>> 
>> 
>> haproxy.cfg:
>> global
>>   lua-load mytest.lua
>> 
>> listen server
>>   modehttp
>>   bind :7999
>>   http-request lua.test
>>   server s 127.0.0.1:8081
>> 
>> 
>> mytest.lua:
>> function test(txn)
>>   -- This works when commenting out the task creation
>>   core.register_task(function()
>>   core.Warning("Aha")
>>   end)
>> end
>> 
>> core.register_action("test", { "http-req" }, test, 0)
>> 
>> 
>> 
>> Cheers,
>> Flakebi
>> 
>> [1] 
>> https://www.arpalert.org/src/haproxy-lua-api/1.8/index.html#core.register_task
> 



Re: [PATCH] REGTEST: filters: add compression test

2019-01-07 Thread Frederic Lecaille

On 12/23/18 11:38 PM, PiBa-NL wrote:

Added LUA requirement into the test..

On 23-12-2018 at 23:05, PiBa-NL wrote:

Hi Frederic,


Hi Pieter,

Sorry for this late reply.

As requested, hereby the regtest sent for inclusion into the git
repository, without randomization and with your .diff applied. It also
outputs the expected and actual checksum if the test fails, so it's
clear that that is the issue detected.


Is it okay like this? Should the blob be bigger? You mentioned needing
a 10MB output to reproduce the original issue on your machine.


It is OK like that.

Note that your patch does not add reg-test/filters/common.pem, which could
be a symlink to ../ssl/common.pem.


Also note that since Christopher's commit 8f16148, we add such a line
where possible:


${no-htx} option http-use-htx

We should also rename your test files to reg-test/filters/h0.*

Thank you.


Fred.