Re: SOAP service healthcheck

2018-12-07 Thread Māra Grīnberga
Thanks for the tip!

Mara

On Sat, Dec 8, 2018, 00:00 Baptiste wrote:
> Hi,
>
> You can also forge an HTTP POST with the tcp-check. This would be less
> hacky.
>
> Baptiste
>
>
> On Thu, Dec 6, 2018 at 09:11, Māra Grīnberga wrote:
>
>> I mean, thanks! I'll look into it!
>>
>> Mara
>>
>> On Thu, Dec 6, 2018, 10:04 Jarno Huuskonen wrote:
>>> Hi,
>>>
>>> On Thu, Dec 06, Māra Grīnberga wrote:
>>> > I'm new to Haproxy and I've a task for which I can't seem to find a
>>> > solution online. Probably, I'm not looking in the right places.
>>> > I need to check if a SOAP service responds before sending requests to the
>>> > server. I've read about this option:
>>> >     option httpchk GET /check
>>> >     http-check expect string OK
>>> > I think, it's what I need. But is there a way to pass SOAP envelope to this
>>> > "/check" service?
>>>
>>> Do you mean POST to /check where the POST body is the SOAP envelope?
>>>
>>> > Any suggestions and help would be appreciated!
>>>
>>> I think you can (ab)use the HTTP version field to send a body with option httpchk
>>> (https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20httpchk)
>>>
>>> One example of sending an XML POST:
>>> https://discourse.haproxy.org/t/healthcheck-with-xml-post-in-body/733
>>>
>>> -Jarno
>>>
>>> --
>>> Jarno Huuskonen
>>>
>>
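
For reference, the httpchk variant Jarno describes would look roughly like this
(an untested sketch: the backend name, server address, Host value and body are
placeholders, and Content-Length must match the exact byte length of the
envelope you actually send):

backend soap_app
    mode http
    # the "version" argument of httpchk can carry extra headers and a body
    option httpchk POST /check HTTP/1.1\r\nHost:\ soap.example.local\r\nContent-Type:\ text/xml\r\nContent-Length:\ 7\r\n\r\n<ping/>
    http-check expect string OK
    server soap01 10.0.3.15:8080 check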


Re: SOAP service healthcheck

2018-12-07 Thread Christopher Cox

tcp-check is what we used for this.

On 12/7/18 3:59 PM, Baptiste wrote:

Hi,

You can also forge an HTTP POST with the tcp-check. This would be less hacky.

Baptiste


On Thu, Dec 6, 2018 at 09:11, Māra Grīnberga wrote:


I mean, thanks! I'll look into it!

Mara

On Thu, Dec 6, 2018, 10:04 Jarno Huuskonen wrote:

Hi,

On Thu, Dec 06, Māra Grīnberga wrote:
 > I'm new to Haproxy and I've a task for which I can't seem to find a
 > solution online. Probably, I'm not looking in the right places.
 > I need to check if a SOAP service responds before sending requests to the
 > server. I've read about this option:
 >        option httpchk GET /check
 >        http-check expect string OK
 > I think, it's what I need. But is there a way to pass SOAP envelope to this
 > "/check" service?

Do you mean POST to /check where the POST body is the SOAP envelope?

 > Any suggestions and help would be appreciated!

I think you can (ab)use the HTTP version field to send a body with option httpchk
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20httpchk)

One example of sending an XML POST:
https://discourse.haproxy.org/t/healthcheck-with-xml-post-in-body/733

-Jarno

-- 
Jarno Huuskonen






Re: SOAP service healthcheck

2018-12-07 Thread Baptiste
Hi,

You can also forge an HTTP POST with the tcp-check. This would be less hacky.

Baptiste
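
Roughly, that could look like the following (an untested sketch: the backend
name, server address, path, Host value and body are placeholders, and
Content-Length must match the exact byte length of the body being sent):

backend soap_app
    mode http
    option tcp-check
    # hand-craft a minimal HTTP POST carrying a placeholder SOAP body
    tcp-check send POST\ /check\ HTTP/1.1\r\n
    tcp-check send Host:\ soap.example.local\r\n
    tcp-check send Content-Type:\ text/xml\r\n
    tcp-check send Content-Length:\ 7\r\n
    tcp-check send \r\n
    tcp-check send <ping/>
    tcp-check expect string OK
    server soap01 10.0.3.15:8080 check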


On Thu, Dec 6, 2018 at 09:11, Māra Grīnberga wrote:

> I mean, thanks! I'll look into it!
>
> Mara
>
> On Thu, Dec 6, 2018, 10:04 Jarno Huuskonen wrote:
>> Hi,
>>
>> On Thu, Dec 06, Māra Grīnberga wrote:
>> > I'm new to Haproxy and I've a task for which I can't seem to find a
>> > solution online. Probably, I'm not looking in the right places.
>> > I need to check if a SOAP service responds before sending requests to the
>> > server. I've read about this option:
>> >     option httpchk GET /check
>> >     http-check expect string OK
>> > I think, it's what I need. But is there a way to pass SOAP envelope to this
>> > "/check" service?
>>
>> Do you mean POST to /check where the POST body is the SOAP envelope?
>>
>> > Any suggestions and help would be appreciated!
>>
>> I think you can (ab)use the HTTP version field to send a body with option httpchk
>> (https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20httpchk)
>>
>> One example of sending an XML POST:
>> https://discourse.haproxy.org/t/healthcheck-with-xml-post-in-body/733
>>
>> -Jarno
>>
>> --
>> Jarno Huuskonen
>>
>


Re: sample fetch: add bc_http_major

2018-12-07 Thread Aleksandar Lazic
Hi Jerome.

On 07.12.2018 at 15:37, Jerome Magnin wrote:
> Hi Aleks,
> 
> On Fri, Dec 07, 2018 at 01:46:53PM +0100, Aleksandar Lazic wrote:
>> Hi Jerome.
>> [...] 
>> I suggest to use a dedicated function for that, jm2c.
>>
>> { "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4SRV },
>>
> 
> If you look at src/ssl_sock.c there are several fetches applying to both
> frontend and backend connection, and each pair uses the same function. I
> shamelessly copied^W^Wtook example from them.

Got it. Thanks for the answer.

> Jérôme

Regards
Aleks



Re: sample fetch: add bc_http_major

2018-12-07 Thread Jerome Magnin
Hi Aleks,

On Fri, Dec 07, 2018 at 01:46:53PM +0100, Aleksandar Lazic wrote:
> Hi Jerome.
> [...] 
> I suggest to use a dedicated function for that, jm2c.
> 
> { "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4SRV },
> 

If you look at src/ssl_sock.c there are several fetches applying to both
frontend and backend connection, and each pair uses the same function. I
shamelessly copied^W^Wtook example from them.

Jérôme



Re: Simply adding a filter causes read error

2018-12-07 Thread flamesea12
Hi,

Thanks for the reply.

I have a test env with 3 identical servers (8-core CPU and 32GB memory): one
for wrk, one for nginx, and one for haproxy. The network looks like
wrk => haproxy => nginx. I have tuned OS settings like open file limits, etc.
The test html file is the default nginx index.html. There's no error when
testing wrk => nginx or wrk => haproxy (no filter) => nginx. Errors begin to
appear as soon as I add a filter.

I thought the performance impact of compression might be the cause, but that's
not it, because the request header sent by wrk does not accept compression.

I even changed the following code so the trace filter detaches immediately,
to avoid the slowdown from all of its printing:

static int
trace_attach(struct stream *s, struct filter *filter)
{
	struct trace_config *conf = FLT_CONF(filter);

	return 0; /* ignore this filter to avoid the performance hit of its many prints */
}

and tested with

    filter trace

This way I think there is no performance impact, since the filter is ignored
at the very beginning. But there are still read errors.

Please let me know if you need more information.

Thanks,

 - Original Message -
 From: Aleksandar Lazic 
 To: flamese...@yahoo.co.jp; "haproxy@formilux.org"  
 Date: 2018/12/7, Fri 22:12
 Subject: Re: Simply adding a filter causes read error
   
Hi.

On 07.12.2018 at 08:37, flamese...@yahoo.co.jp wrote:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration, the read error still appears.

Can you please show us some logs from when the error appears?
Can you also tell us some details about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are reaching some system limits, as compression requires some more os/hw
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000 
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
>    - Original Message -
>    *From:* "flamese...@yahoo.co.jp" 
>    *To:* Aleksandar Lazic ; "haproxy@formilux.org"
>    
>    *Date:* 2018/12/7, Fri 09:06
>    *Subject:* Re: Simply adding a filter causes read error
> 
>    Hi,
> 
>    Thanks for the reply, I thought the mail format is corrupted..
> 
>    I tried option http-pretend-keepalive; it seems the read error is gone, but a
>    timeout error is raised (maybe it's because of the 1000 connections of wrk).
> 
>    Thanks
> 
>        - Original Message -
>        *From:* Aleksandar Lazic 
>        *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
>
>        *Date:* 2018/12/6, Thu 23:53
>        *Subject:* Re: Simply adding a filter causes read error
> 
>        Hi.
> 
>        On 06.12.2018 at 15:20, flamese...@yahoo.co.jp wrote:
>        > Hi,
>        >
>        > I have haproxy (v1.8.14) in front of several nginx backends; everything
>        > works fine until I add compression in haproxy.
> 
>        There is a similar thread about this topic.
> 
>        https://www.mail-archive.com/haproxy@formilux.org/msg31897.html 
> 
>        Can you try to add this option in your config and see if the problem is
>        gone.
> 
>        option http-pretend-keepalive
> 
>        Regards
>        Aleks
> 
>        > My config looks like this:
>        >
>        > ### Config start #
>        > global
>        >     maxconn         100
>        >     daemon
>        >     nbproc 2
>        >
>        > defaults
>        >     retries 3
>        >     option redispatch
>        >     timeout client  60s
>        >     timeout connect 60s
>        >     timeout server  60s
>        >     timeout http-request 60s
>        >     timeout http-keep-alive 60s
>        >
>        > frontend web
>        >     bind *:8000
>        >
>        >     mode http
>        >     default_backend app
>        > backend app
>        >     mode http
>        >     #filter compression
>        >     #filter trace 
>        >     server nginx01 10.0.3.15:8080
>        > ### Config end #
>        >
>        >
>        > Lua script used in wrk:
>        > a.lua:
>        >
>        > local count = 0
>        >
>        > request = function()
>        >     local url = "/?count=" .. count
>        >     count = count + 1
>        >     return wrk.format(
>        >     'GET',
>        >     url
>        >     )
>        > end
>        >
>        >
>        > 01. wrk test against nginx: everything is OK
>        >
>        > wrk -c 1000 -s a.lua http://10.0.3.15:8080 
>        > Running 10s test @ http://10.0.3.15:8080 
>        >   2 threads and 1000 connections
>        >   Thread Stats   Avg      Stdev     Max   +/- Stdev
>        >     Laten

Re: Simply adding a filter causes read error

2018-12-07 Thread Aleksandar Lazic
Hi.

On 07.12.2018 at 08:37, flamese...@yahoo.co.jp wrote:
> Hi
> 
> I tested more, and found that even with option http-pretend-keepalive enabled,
> 
> if I increase the test duration, the read error still appears.

Can you please show us some logs from when the error appears?
Can you also tell us some details about the servers on which haproxy, wrk and nginx
are running, and what the network setup looks like?

Maybe you are reaching some system limits, as compression requires some more os/hw
resources.

Regards
Aleks

> Running 3m test @ http://10.0.3.15:8000
>   10 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    19.84ms   56.36ms   1.34s    92.83%
>     Req/Sec    23.11k     2.55k   50.64k    87.10%
>   45986426 requests in 3.33m, 36.40GB read
>   Socket errors: connect 0, read 7046, write 0, timeout 0
> Requests/sec: 229817.63
> Transfer/sec:    186.30MB
> 
> thanks
> 
> - Original Message -
> *From:* "flamese...@yahoo.co.jp" 
> *To:* Aleksandar Lazic ; "haproxy@formilux.org"
> 
> *Date:* 2018/12/7, Fri 09:06
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi,
> 
> Thanks for the reply, I thought the mail format is corrupted..
> 
> I tried option http-pretend-keepalive; it seems the read error is gone, but a
> timeout error is raised (maybe it's because of the 1000 connections of wrk).
> 
> Thanks
> 
> - Original Message -
> *From:* Aleksandar Lazic 
> *To:* flamese...@yahoo.co.jp; "haproxy@formilux.org" 
> 
> *Date:* 2018/12/6, Thu 23:53
> *Subject:* Re: Simply adding a filter causes read error
> 
> Hi.
> 
> On 06.12.2018 at 15:20, flamese...@yahoo.co.jp wrote:
> > Hi,
> >
> > I have haproxy (v1.8.14) in front of several nginx backends; everything
> > works fine until I add compression in haproxy.
> 
> There is a similar thread about this topic.
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg31897.html
> 
> Can you try to add this option in your config and see if the problem 
> is
> gone.
> 
> option http-pretend-keepalive
> 
> Regards
> Aleks
> 
> > My config looks like this:
> >
> > ### Config start #
> > global
> >     maxconn         100
> >     daemon
> >     nbproc 2
> >
> > defaults
> >     retries 3
> >     option redispatch
> >     timeout client  60s
> >     timeout connect 60s
> >     timeout server  60s
> >     timeout http-request 60s
> >     timeout http-keep-alive 60s
> >
> > frontend web
> >     bind *:8000
> >
> >     mode http
> >     default_backend app
> > backend app
> >     mode http
> >     #filter compression
> >     #filter trace 
> >     server nginx01 10.0.3.15:8080
> > ### Config end #
> >
> >
> > Lua script used in wrk:
> > a.lua:
> >
> > local count = 0
> >
> > request = function()
> >     local url = "/?count=" .. count
> >     count = count + 1
> >     return wrk.format(
> >     'GET',
> >     url
> >     )
> > end
> >
> >
> > 01. wrk test against nginx: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8080 
> > Running 10s test @ http://10.0.3.15:8080 
> >   2 threads and 1000 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency    34.83ms   17.50ms 260.52ms   76.48%
> >     Req/Sec    12.85k     2.12k   17.20k    62.63%
> >   255603 requests in 10.03s, 1.23GB read
> > Requests/sec:  25476.45
> > Transfer/sec:    125.49MB
> >
> >
> > 02. Wrk test against haproxy, no filters: everything is OK
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8000 
> > Running 10s test @ http://10.0.3.15:8000 
> >   2 threads and 1000 connections
> >   Thread Stats   Avg      Stdev     Max   +/- Stdev
> >     Latency    73.58ms  109.48ms   1.33s    97.39%
> >     Req/Sec     7.83k     1.42k   11.95k    66.15%
> >   155843 requests in 10.07s, 764.07MB read
> > Requests/sec:  15476.31
> > Transfer/sec:     75.88MB
> >
> > 03. Wrk test against haproxy, add filter compression: read error
> >
> > Change
> >
> >     #filter compression
> > ===>
> >     filter compression
> >
> > wrk -c 1000 -s a.lua http://10.0.3.15:8000 

Re: sample fetch: add bc_http_major

2018-12-07 Thread Aleksandar Lazic
Hi Jerome.

On 07.12.2018 at 10:26, Jerome Magnin wrote:
> Hi,
> 
> the attached patch adds bc_http_major. It returns the HTTP major encoding of the
> backend connection, based on the on-wire encoding.

Cool idea ;-)

I suggest using a dedicated function for that, jm2c.

{ "bc_http_major", smp_fetch_bc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4SRV },


> Jérôme

Regards
aleks



Re: [PATCH] BUG/MEDIUM: Expose all converters & fetches

2018-12-07 Thread Willy Tarreau
Hello,

On Thu, Dec 06, 2018 at 11:36:33PM -0800, Robin H. Johnson wrote:
> One of my coworkers was having some trouble trying to escape data for
> JSON in Lua, using the 'json' converter, based on the documentation, and
> this lead to a deep bug discovery.
> 
> The Lua documentation [1] states that JSON escaping converter is exposed
> in Lua, but it turns out that's not quite true.
> 
> Creatively dumping the function metatable [2] (see code at the end)
> shows only a subset of converters exposed, and notable is missing at
> least the following as of
(...)
> All of these have in common that they have a validation of arguments.
> Those with a * gained validation of arguments in commit
> 5d86fae2344dbfacce5479ba86bd2d2866bf5474 (v1.6-dev2-52-g5d86fae23)
> 
> This bug has been around since the start of Lua fetchers & converters.
> 
> hlua_run_sample_conv is capable of running the args checker [3]:
> ```c
>  /* Run the special args checker. */
>  if (conv->val_args && !conv->val_args(args, conv, "", 0, NULL)) {
>   hlua_pusherror(L, "error in arguments");
>   WILL_LJMP(lua_error(L));
>  }
> ```
> 
> But any converters with arguments checking functions are not registered [4]:
> ```c
>  /* Dont register the keywork if the arguments check function are
>   * not safe during the runtime.
>   */
>  if (sc->val_args != NULL)
>   continue;
> ```
> 
> Fetchers have a similar issue, but some checkers were explicitly permitted:
> ```c
>  /* Dont register the keywork if the arguments check function are
>   * not safe during the runtime.
>   */
>  if ((sf->val_args != NULL) &&
>  (sf->val_args != val_payload_lv) &&
>   (sf->val_args != val_hdr))
>   continue;
> ```

Well, for me the reason here is clearly mentioned in the comments, which
is that only those explicitly whitelisted there are safe for use at runtime.
Maybe since then others became safe and should be added, but your patch
simply removes these tests and will lead to bad things happening at run
time for people using unsafe functions.

I had a quick look, some converters use check_operator() which creates
a variable upon each invocation of the parsing function. Some people
might inadvertently get caught by using these ones to look up cookie
values or session identifiers for example and not realize that zoombie
variables are lying behind. Another one is smp_check_const_bin() which
will call parse_binary(), returning an allocated memory block representing
the binary string, that nobody will free either. And for converters you
can see that all map-related functions use sample_load_map(), which will
attempt to load a file from the file system. Not only this must not be
attempted at run time for obvious performance reasons (will not work if
the config properly uses chroot anyway), but it may also use huge amounts
of memory on each call.

For the time being, I think that instead a solution could be to review
the various val_args functions to see which ones are safe and may be
whitelisted for use with Lua, and to comment at the top of these functions
that they have to remain safe for use at run time.
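
Something along these lines in hlua.c, for instance (just a sketch of the idea,
not a tested patch; the commented-out entry is a placeholder for whichever
checker turns out to be runtime-safe after such a review):

	/* Don't register the keyword if the arguments check function is
	 * not safe during the runtime, unless it is explicitly whitelisted.
	 */
	if ((sf->val_args != NULL) &&
	    (sf->val_args != val_payload_lv) &&
	    (sf->val_args != val_hdr) /* &&
	    (sf->val_args != some_checker_proven_runtime_safe) */)
		continue;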

Thanks,
Willy



Re: [PATCH] REGTEST: Move LUA reg level 4 test 4 to level 1

2018-12-07 Thread Willy Tarreau
Hi Fred!

On Fri, Dec 07, 2018 at 11:25:51AM +0100, Frederic Lecaille wrote:
> I think that Pieter's level 4 LUA test 4 script should be moved to level 1 (as
> a feature test).

Good point, now applied, thanks!
Willy



[PATCH] REGTEST: Move LUA reg level 4 test 4 to level 1

2018-12-07 Thread Frederic Lecaille

Hi all,

I think that Pieter's level 4 LUA test 4 script should be moved to level 1 (as
a feature test).

Fred.
From ac0188df083da4e240f87c34557cbf0ab9fd589d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Fri, 7 Dec 2018 11:16:35 +0100
Subject: [PATCH] REGTEST: Move LUA reg test 4 to level 1.

This Pieter script deserves to be moved to level 1 (feature test).
---
 reg-tests/lua/{b4.lua => h2.lua} | 0
 reg-tests/lua/{b4.vtc => h2.vtc} | 0
 2 files changed, 0 insertions(+), 0 deletions(-)
 rename reg-tests/lua/{b4.lua => h2.lua} (100%)
 rename reg-tests/lua/{b4.vtc => h2.vtc} (100%)

diff --git a/reg-tests/lua/b4.lua b/reg-tests/lua/h2.lua
similarity index 100%
rename from reg-tests/lua/b4.lua
rename to reg-tests/lua/h2.lua
diff --git a/reg-tests/lua/b4.vtc b/reg-tests/lua/h2.vtc
similarity index 100%
rename from reg-tests/lua/b4.vtc
rename to reg-tests/lua/h2.vtc
-- 
2.11.0



sample fetch: add bc_http_major

2018-12-07 Thread Jerome Magnin
Hi,

the attached patch adds bc_http_major. It returns the HTTP major encoding of the
backend connection, based on the on-wire encoding.

Jérôme
From e0a28394ea2da5757c1e72773ab4c9fb97565a35 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?J=C3=A9r=C3=B4me=20Magnin?= 
Date: Fri, 7 Dec 2018 09:03:11 +0100
Subject: [PATCH] sample: add bc_http_major

---
 doc/configuration.txt | 5 +
 src/connection.c  | 4 +++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index e6678c17..18951f8c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14352,6 +14352,11 @@ table may be specified with the "sc*" form, in which case the currently
 tracked key will be looked up into this alternate table instead of the table
 currently being tracked.
 
+bc_http_major: integer
+  Returns the backend connection's HTTP major version encoding, which may be 1
+  for HTTP/0.9 to HTTP/1.1 or 2 for HTTP/2. Note, this is based on the on-wire
+  encoding and not the version present in the request header.
+
 be_id : integer
   Returns an integer containing the current backend's id. It can be used in
   frontends with responses to check which backend processed the request.
diff --git a/src/connection.c b/src/connection.c
index 054e1998..2b0063e6 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -1258,7 +1258,8 @@ int make_proxy_line_v2(char *buf, int buf_len, struct server *srv, struct connec
 static int
 smp_fetch_fc_http_major(const struct arg *args, struct sample *smp, const char *kw, void *private)
 {
-	struct connection *conn = objt_conn(smp->sess->origin);
+	struct connection *conn = (kw[0] != 'b') ? objt_conn(smp->sess->origin) :
+	                           smp->strm ? cs_conn(objt_cs(smp->strm->si[1].end)) : NULL;
 
 	smp->data.type = SMP_T_SINT;
 	smp->data.u.sint = (conn && strcmp(conn_get_mux_name(conn), "H2") == 0) ? 2 : 1;
@@ -1293,6 +1294,7 @@ int smp_fetch_fc_rcvd_proxy(const struct arg *args, struct sample *smp, const ch
  */
 static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 	{ "fc_http_major", smp_fetch_fc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4CLI },
+	{ "bc_http_major", smp_fetch_fc_http_major, 0, NULL, SMP_T_SINT, SMP_USE_L4SRV },
 	{ "fc_rcvd_proxy", smp_fetch_fc_rcvd_proxy, 0, NULL, SMP_T_BOOL, SMP_USE_L4CLI },
 	{ /* END */ },
 }};
-- 
2.19.2
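

For anyone who wants to try it once the patch is applied, a minimal usage
sketch (the backend, server and header names here are just placeholders):

backend app
    mode http
    # expose which on-wire HTTP version was negotiated towards the server
    http-response set-header X-Backend-HTTP-Major %[bc_http_major]
    server nginx01 10.0.3.15:8080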