help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Aleksandar Lazic

Hi.

I try to implement "sample_fetch_json_string" based on 
https://github.com/cesanta/mjson.

Because I haven't implemented a fetch function until now it would be nice when 
somebody
helps me and point me into the right direction. Maybe I have overseen a 
documentation in
the doc directory.

Let's assume there is this haproxy liney.

```
# get the namespace from a bearer token
http-request set-var(sess.json) 
%[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")]
http-request return status 200 content-type text/plain lf-string 
%[date,ltime(%Y-%m-%d_%H-%M-%S)] hdr x-var "val=%[var(sess.json)]"
```

When I run this I get the following message. That's what I also don't 
understand because
I have added the "sample_fetch_json_string" into the "static struct 
sample_fetch_kw_list smp_kws = ...".

```
./haproxy -d -f ../test-haproxy.conf
[NOTICE] 097/170201 (1105377) : haproxy version is 2.4-dev15-909947-31
[NOTICE] 097/170201 (1105377) : path to executable is ./haproxy
[ALERT] 097/170201 (1105377) : parsing [../test-haproxy.conf:10] : error 
detected in frontend 'fe1' while parsing 'http-request set-var(sess.json)' rule 
: missing fetch method.
[ALERT] 097/170201 (1105377) : Error(s) found in configuration file : 
../test-haproxy.conf
```

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this assumption 
right?

You see I have some open question which I hope someone can answer it.

That's the function signature.

https://github.com/cesanta/mjson#mjson_get_string
// s, len is a JSON string [ "abc", "de\r\n" ]
int mjson_get_string(const char *s, int len, const char *path, char *to, int 
sz);

I think that this line isn't right, but what's the right one?

rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

attached the WIP diff and the test config.

It's similar concept as the env fetch function.
What I don't know in which struct is what.


``` from sample.c
smp_fetch_env(const struct arg *args, struct sample *smp, const char *kw, void 
*private)

/* This sample function fetches the value from a given json string.
 * The mjson library is used to parse the json struct
*/
static int sample_fetch_json_string(const struct arg *args, struct sample *smp, 
const char *kw, void *private)
{
struct buffer *tmp;
int rc;

tmp = get_trash_chunk();
/* json stringjson string length 
search patternvalue value length
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
"$.kubernetes\\.io/serviceaccount/namespace", tmp->area, tmp->size);
*/
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

smp->flags |= SMP_F_CONST;
smp->data.type = SMP_T_STR;
smp->data.u.str.area = tmp->area;
smp->data.u.str.data = tmp->data;
return 1;
}
```

Regards
Alex
diff --git a/Makefile b/Makefile
index 9b22fe4be..7f6998cdc 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o \
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o \
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   \
+src/mjson.o
 
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
@@ -946,6 +947,10 @@ dev/poll/poll:
 dev/tcploop/tcploop:
 	$(Q)$(MAKE) -C dev/tcploop tcploop CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
 
+dev/json/json: dev/json/json.o dev/json/mjson/src/mjson.o src/chunk.o
+	$(cmd_LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+	#$(Q)$(MAKE) -C dev/json json CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
+
 # rebuild it every time
 .PHONY: src/version.c
 
diff --git a/dev/json/test-data.json b/dev/json/test-data.json
new file mode 100644
index 0..fdda596e9
--- /dev/null
+++ b/dev/json/test-data.json
@@ -0,0 +1 @@
+{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"openshift-logging","kubernetes.io/serviceaccount/secret.name":"deployer-token-m98xh","kubernetes.io/serviceaccount/service-account.name":"deployer","kubernetes.io/serviceaccount/service-account.uid":"35dddefd-3b5a-11e9-947c-fa163e480910","sub":"system:serviceaccount:openshift-logging:deployer"}
\ No newline at end of file
diff --git a/dev/json/test-data.json.base64 b/dev/json/test-data.json.base64
new file mode 100644
index 0..75cddd3ac
--- /dev/null
+++ b/dev/json/test-data.json.base64
@@ -0,0 +1 @@

Re: [HAP 2.4-dev] Quotes in str fetch sample

2021-04-08 Thread Aleksandar Lazic

Hi.

Never mind. I have send the header in base64 and decode it.

```shell
curl -vH 'Authorization: '$(< 
/datadisk/git-repos/haproxy/dev/json/test-data.json.base64 ) http://127.0.0.1:8080

*   Trying 127.0.0.1:8080...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.68.0
> Accept: */*
> Authorization: 
eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbG9nZ2luZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3llci10b2tlbi1tOTh4aCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZXBsb3llciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM1ZGRkZWZkLTNiNWEtMTFlOS05NDdjLWZhMTYzZTQ4MDkxMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpvcGVuc2hpZnQtbG9nZ2luZzpkZXBsb3llciJ9
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< x-var: 
json={"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"openshift-logging","kubernetes.io/serviceaccount/secret.name":"deployer-token-m98xh","kubernetes.io/serviceaccount/service-account.name":"deployer","kubernetes.io/serviceaccount/service-account.uid":"35dddefd-3b5a-11e9-947c-fa163e480910","sub":"system:serviceaccount:openshift-logging:deployer"}
 val=
< content-length: 10
< content-type: text/plain

```

```
http-request set-var(req.json)  req.hdr(Authorization),b64dec
http-request return status 200 content-type text/plain lf-string %[date] hdr x-var 
"json=%[var(req.json)] val=%[var(sess.json)]"

```

regards
alex

On 08.04.21 01:27, Aleksandar Lazic wrote:

Hi.

I try to implement "sample_fetch_json_string" based on 
https://github.com/cesanta/mjson.
My current test looks good but I'm struggling with the test setup.

```
git-repos/haproxy$ ./haproxy -c -f ../test-haproxy.conf
[NOTICE] 097/012132 (1043229) : haproxy version is 2.4-dev15-8daf8d-30
[NOTICE] 097/012132 (1043229) : path to executable is ./haproxy
[ALERT] 097/012132 (1043229) : parsing [../test-haproxy.conf:9] : error 
detected in frontend 'fe1' while parsing
'http-request set-var(req.json)' rule : fetch method 'str' : expected ')' before 
',\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'.

[ALERT] 097/012132 (1043229) : Error(s) found in configuration file : 
../test-haproxy.conf

```

That's the config.
```
defaults
    mode http
    timeout connect 1s
    timeout client  1s
    timeout server  1s

frontend fe1
    bind "127.0.0.1:8080"
    http-request set-var(req.json)  
'str({\"iss\":\"kubernetes/serviceaccount\",\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'

```

I have tried several combos like.
str("...")
str('...')
str(...)
I have also added more '\' in the string.

But I get always the error above.

Any Idea how to fix the error?

Regards
Alex






[HAP 2.4-dev] Quotes in str fetch sample

2021-04-07 Thread Aleksandar Lazic

Hi.

I try to implement "sample_fetch_json_string" based on 
https://github.com/cesanta/mjson.
My current test looks good but I'm struggling with the test setup.

```
git-repos/haproxy$ ./haproxy -c -f ../test-haproxy.conf
[NOTICE] 097/012132 (1043229) : haproxy version is 2.4-dev15-8daf8d-30
[NOTICE] 097/012132 (1043229) : path to executable is ./haproxy
[ALERT] 097/012132 (1043229) : parsing [../test-haproxy.conf:9] : error 
detected in frontend 'fe1' while parsing
'http-request set-var(req.json)' rule : fetch method 'str' : expected ')' before 
',\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'.

[ALERT] 097/012132 (1043229) : Error(s) found in configuration file : 
../test-haproxy.conf

```

That's the config.
```
defaults
mode http
timeout connect 1s
timeout client  1s
timeout server  1s

frontend fe1
bind "127.0.0.1:8080"
http-request set-var(req.json)  
'str({\"iss\":\"kubernetes/serviceaccount\",\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'

```

I have tried several combos like.
str("...")
str('...')
str(...)
I have also added more '\' in the string.

But I get always the error above.

Any Idea how to fix the error?

Regards
Alex



Re: zlib vs slz (perfoarmance)

2021-03-30 Thread Aleksandar Lazic

+1

On 30.03.21 08:17, Илья Шипицин wrote:

I would really like to know whether zlib was chosen for purpose or by chance.

And yes, some marketing campaign makes sense

On Tue, Mar 30, 2021, 10:35 AM Dinko Korunic mailto:dinko.koru...@gmail.com>> wrote:


 > On 29.03.2021., at 23:06, Lukas Tribus mailto:lu...@ltri.eu>> wrote:
 >

[…]

 > Like I said last year, this needs a marketing campaign:
 > https://www.mail-archive.com/haproxy@formilux.org/msg38044.html 

 >
 >
 > What about the docker images from haproxytech? Are those zlib or slz
 > based? Perhaps that would be a better starting point?
 >
 > https://hub.docker.com/r/haproxytech/haproxy-alpine 




Hi Lukas,

I am maintaining haproxytech Docker images and I can easily make that (slz 
being used) happen, if that’s what community would like to see.


Kind regards,
D.

-- 
Dinko Korunic                   ** Standard disclaimer applies **

Sent from OSF1 osf1v4b V4.0 564 alpha







Re: Is there a way to deactivate this "message repeated x times"

2021-03-29 Thread Aleksandar Lazic

On 29.03.21 18:55, Lukas Tribus wrote:

Hello,

On Mon, 29 Mar 2021 at 15:25, Aleksandar Lazic  wrote:


Hi.

I need to create some log statistics with awffull stats and I assume this 
messages
means that only one line is written for 3 requests, is this assumption right?

Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ ::::49445 [28/Mar/2021:14:04:07.234] 
https-in~ be_api/api_prim 0/0/0/13/13 200 2928 - -  930/900/8/554/0 0/0 {|Mozilla/5.0 (Macintosh; Intel 
Mac OS X 10.13; rv:86.0) Gecko/20100101 Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET 
https:/// HTTP/2.0"]

Can this behavior be disabled?


This is not haproxy, this is your syslog server. Refer to the
documentation of the syslog server.


Oh yes of course, *clap on head*.

Looks like the RepeatedMsgReduction is on Ubuntu 18.04.5 LTS is this by default 
on.

https://www.rsyslog.com/doc/v8-stable/configuration/action/rsconf1_repeatedmsgreduction.html

I have solved it with this ansible snipplet.

```
- name: Deactivate RepeatedMsgReduction in rsyslog
  lineinfile:
backup: yes
line: $RepeatedMsgReduction off
path: /etc/rsyslog.conf
regexp: '^\$RepeatedMsgReduction on'
  tags: haproxy,all,syslog
  register: syslog

- name: Restart syslog
  service:
name: rsyslog
state: restarted
  when: syslog.changed
  tags: haproxy,all,syslog
```


Lukas


Regards
Alex




Is there a way to deactivate this "message repeated x times"

2021-03-29 Thread Aleksandar Lazic

Hi.

I need to create some log statistics with awffull stats and I assume this 
messages
means that only one line is written for 3 requests, is this assumption right?

Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ ::::49445 [28/Mar/2021:14:04:07.234] 
https-in~ be_api/api_prim 0/0/0/13/13 200 2928 - -  930/900/8/554/0 0/0 {|Mozilla/5.0 (Macintosh; Intel 
Mac OS X 10.13; rv:86.0) Gecko/20100101 Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET 
https:/// HTTP/2.0"]

Can this behavior be disabled?

Regards
Alex



[HAP 2.3.8] some missunderstandint of Session state and server correlation

2021-03-27 Thread Aleksandar Lazic

Hi.

As I understand the LH and LR combo right should no server be involved.
I expected in the https-in line also a "" but there is the 
"be_default/default_prim".
Do I missunderstand the 'L' flag which is described as below

```
the session was locally processed by haproxy and was not passed to
a server. This is what happens for stats and redirects.
```

 is always the same.

Mar 27 13:33:35 lb1 haproxy[634]: :58572 [27/Mar/2021:13:33:35.713]
http-in http-in/ 0/-1/-1/-1/0 301 121 - - LR-- 964/2/0/0/0 0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: :58572 [27/Mar/2021:13:33:35.713]
http-in http-in/ 0/-1/-1/-1/0 301 121 - - LR-- 964/2/0/0/0 0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: ::::36964 
[27/Mar/2021:13:33:35.837]
https-in~ be_default/default_prim 0/0/44/-1/57 200 266 - - LH-- 971/946/2/2/0 
0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)||128|TLS_AES_128_GCM_SHA256|TLSv1.3|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: ::::36964 
[27/Mar/2021:13:33:35.837]
https-in~ be_default/default_prim 0/0/44/-1/57 200 266 - - LH-- 971/946/2/2/0 
0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)||128|TLS_AES_128_GCM_SHA256|TLSv1.3|}
"GET /robots.txt HTTP/1.1"

It's haproxy 2.3.8 and this is are the frontend sections.

```
frontend http-in
  bind *:80

  http-request capture req.fhdr(Referer) len 128
  http-request capture req.fhdr(User-Agent) len 256
  http-request capture req.hdr(host) len 148
  http-request set-var(txn.req_path) path

  http-response return content-type text/plain string "User-agent: *\nAllow: 
/\n" if { var(txn.req_path) /robots.txt }
  http-response return status 404 if { var(txn.req_path) /sitemap.txt }

  acl host_redir hdr(host),map(/etc/haproxy/redirect.map) -m found
  http-request redirect code 301 location 
%[req.hdr(host),map(/etc/haproxy/redirect.map)] if host_redir

  http-request redirect code 301 location 
https://%[hdr(host)]%[capture.req.uri] if ! { path_beg 
/.well-known/acme-challenge/ }

  use_backend be_nginx if { path_beg /.well-known/acme-challenge/ }

frontend https-in

  bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file 
/etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  http-request deny if { src -f /etc/haproxy/denylist.acl }

  http-request set-var(txn.req_path) path

  http-response return content-type text/plain string "User-agent: *\nAllow: 
/\n" if { var(txn.req_path) /robots.txt }
  http-response return status 404 if { var(txn.req_path) /sitemap.txt }

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # collect ssl infos.
  http-request set-var(txn.cap_alg_keysize) ssl_fc_alg_keysize
  http-request set-var(txn.cap_cipher) ssl_fc_cipher
  http-request set-var(txn.cap_protocol) ssl_fc_protocol

  declare capture request len 128
  declare capture request len 256
  declare capture request len 148
  declare capture request len 148
  declare capture request len 148
  declare capture request len 148

  http-request capture req.hdr(host) len 148

  # Add CORS response header
  acl is_cors_preflight method OPTIONS
  http-response add-header Access-Control-Allow-Origin "*" if is_cors_preflight
  http-response add-header Access-Control-Allow-Methods "GET,POST" if 
is_cors_preflight
  http-response add-header Access-Control-Allow-Credentials "true" if 
is_cors_preflight
  http-response add-header Access-Control-Max-Age "600" if is_cors_preflight

  # 
https://www.haproxy.com/blog/haproxy-and-http-strict-transport-security-hsts-header-in-http-redirects/
  http-response set-header Strict-Transport-Security "max-age=15768000; 
includeSubDomains"
  http-response set-header X-Frame-Options  "SAMEORIGIN"
  http-response set-header X-Xss-Protection "1; mode=block"
  http-response set-header X-Content-Type-Options   "nosniff"
  http-response set-header Referrer-Policy  "origin-when-cross-origin"

  use_backend be_nginx if { path_beg /.well-known/acme-challenge/ }
  use_backend 
%[req.hdr(host),lower,map(/etc/haproxy/haproxy_backend.map,be_default)]

```

regards
Alex



Re: [HAP 2.3.8] Is there a way to see why "" and "SSL handshake failure" happens

2021-03-27 Thread Aleksandar Lazic

On 27.03.21 12:01, Lukas Tribus wrote:

Hello,

On Sat, 27 Mar 2021 at 11:52, Aleksandar Lazic  wrote:


Hi.

I have a lot of such entries in my logs.

```
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/ -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 ""
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/ -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 ""


Use show errors on the admin socket:
https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.3-show%20errors


Thanks.


Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure


That's currently a pain point:

https://github.com/haproxy/haproxy/issues/693


Thanks.


Lukas



Regards
Alex



[HAP 2.3.8] Is there a way to see why "" and "SSL handshake failure" happens

2021-03-27 Thread Aleksandar Lazic

Hi.

I have a lot of such entries in my logs.

```
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/ -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 ""
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/ -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 ""

Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
```

Is there an easy way to see why this happens?

```
root@lb1:~# haproxy -vv
HA-Proxy version 2.3.8-1ppa1~bionic 2021/03/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.8.html
Running on: Linux 4.15.0-139-generic #143-Ubuntu SMP Tue Mar 16 01:30:17 UTC 
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-ot86Gj/haproxy-2.3.8=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 
USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
fcgi : mode=HTTP   side=BEmux=FCGI
: mode=HTTP   side=FE|BE mux=H1
: mode=TCPside=FE|BE mux=PASS

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
```

Regards
Alex



Which mode for Quic?

2021-03-02 Thread Aleksandar Lazic

Hi.

I assume that QUIC is a dedicated mode right?

Something like
   h3 : mode=QUIC   side=FE|BE mux=H3


```
Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
fcgi : mode=HTTP   side=BEmux=FCGI
: mode=HTTP   side=FE|BE mux=H1
: mode=TCPside=FE|BE mux=PASS
```

Regards
Aleks



Re: Setting up haproxy for tomcat SSL Valve

2021-02-25 Thread Aleksandar Lazic

On 25.02.21 07:38, Jarno Huuskonen wrote:

Hi,

On Thu, 2021-02-25 at 03:24 +0100, Aleksandar Lazic wrote:

Hi.

I try to setup HAProxy (precisely  OpenShift Router :-)) to send the TLS/SSL
Client
Information's to tomcat.

On the SSL Valve page are the following parameters available.

http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#SSL_Valve

SSL_CLIENT_CERT string  PEM-encoded client certificate
?

The only missing parameter is "SSL_CLIENT_CERT in PEM format". There is one
in DER Format
ssl_c_der in HAProxy but the code in SSL-Valve expects the PEM format.

https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/valves/SSLValve.java#L125

Have I overseen something in the HAProxy code or doc or isn't there
currently an option to get
the  client certificate out of HAProxy in PEM format?


It should be possible (had this working years ago):
(https://www.mail-archive.com/haproxy@formilux.org/msg20883.html
http://shibboleth.net/pipermail/users/2015-July/022674.html)

Something like:
http-request add-header X-SSL-Client-Cert -BEGIN\ CERTIFICATE-\
%[ssl_c_der,base64]\ -END\ CERTIFICATE-\ # don't forget last space


Cool thanks.


-Jarno



Best regards
Alex



Setting up haproxy for tomcat SSL Valve

2021-02-24 Thread Aleksandar Lazic

Hi.

I try to setup HAProxy (precisely  OpenShift Router :-)) to send the TLS/SSL 
Client
Information's to tomcat.

On the SSL Valve page are the following parameters available.

http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#SSL_Valve

```
sslClientCertHeader:
Allows setting a custom name for the ssl_client_cert header. If not specified, 
the default
of "ssl_client_cert" is used.

sslCipherHeader:
Allows setting a custom name for the ssl_cipher header. If not specified, the 
default
of "ssl_cipher" is used.

sslSessionIdHeader:
Allows setting a custom name for the ssl_session_id header. If not specified, 
the default
of "ssl_session_id" is used.

sslCipherUserKeySizeHeader:
Allows setting a custom name for the ssl_cipher_usekeysize header. If not 
specified, the
default of "ssl_cipher_usekeysize" is used.
```

I have found some corresponding variables on the mod_ssl page and the HAProxy 
samples, at
least I hope I found the right one on HAProxy site.

https://httpd.apache.org/docs/current/mod/mod_ssl.html#envvars

SSL_CLIENT_CERT string  PEM-encoded client certificate
?

SSL_CIPHER  string  The cipher specification name
http-request set-header ssl_cipher   %[ssl_fc_cipher]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_cipher

SSL_SESSION_ID  string  The hex-encoded SSL session id
http-request set-header ssl_session_id %[ssl_fc_session_id,hex]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_session_id

SSL_CIPHER_USEKEYSIZE   number  Number of cipher bits (actually used)
http-request set-header ssl_cipher_usekeysize %[ssl_fc_alg_keysize]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_alg_keysize

The only missing parameter is "SSL_CLIENT_CERT in PEM format". There is one in 
DER Format
ssl_c_der in HAProxy but the code in SSL-Valve expects the PEM format.

https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/valves/SSLValve.java#L125

Have I overseen something in the HAProxy code or doc or isn't there currently 
an option to get
the  client certificate out of HAProxy in PEM format?

Regards
Alex



Re: Apache Proxypass mimicing ?

2021-02-22 Thread Aleksandar Lazic

Hi.

On 22.02.21 01:31, Igor Cicimov wrote:


But if I do some configuration tweaks in "wp-config.php", like adding the 
following two lines :

define('WP_HOME', 'https://front1.domain.local 
');
define('WP_SITEURL', 'https://front1.domain.local 
');

It seems to work correctly.

It is not an acceptable solution however, as these WP instances will be 
managed by people who are not really tech-savvy.


So I wonder if HAProxy could provide a setup with all the required modifications, 
rewritings, ... allowing both worlds to coexist in a transparent way :

- usable WP site while browsing the "real" URLs from the backend
- usable WP site while browsing through HAProxy.

Right now WP is my concern, but I am sure this is a reusable "pattern" for 
future needs.

Regards


This is a requirement for most apps behind a reverse proxy -- you simply have to 
tell the app that it is behind a reverse proxy so it can set correct links where needed.


In your case if you google for "wordpress behind reverse proxy" I'm sure you'll 
get a ton of resources that can point you in the right direction for your use 
case like using X-FORWARD headers for example or whatever suits you.


Full Ack to Igor's statment.

A a further Idea maybe you can replace the response.
http://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4.2-http-response%20replace-header
http://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4.2-http-response%20replace-value

It could be tricky for a huge amount of hosts, due to this fact I suggest to
setup WP with WP_HOME and WP_SITEURL which is possible via wp-admin via GUI :-)

You can also create a smal setup tool which adds the values to the wp_config and
add the haproxy map entry for the domain.

Regards
Alex



Vote for us as RegTech Partner of 
the Year at the British Bank Awards!

Know Your Customer due diligence on demand, powered by intelligent process 
automation


Blogs   | LinkedIn 
  | Twitter 



Encompass Corporation UK Ltd  | Company No. SC493055 | Address: Level 3, 33 
Bothwell Street, Glasgow, UK, G2 6NL

Encompass Corporation Pty Ltd  | ACN 140 556 896 | Address: Level 10, 117 
Clarence Street, Sydney, New South Wales, 2000

This email and any attachments is intended only for the use of the individual 
or entity named above and may contain confidential information.

If you are not the intended recipient, any dissemination, distribution or 
copying of this email is prohibited.

If received in error, please notify us immediately by return email and destroy 
the original message.









Re: Apache Proxypass mimicing ?

2021-02-18 Thread Aleksandar Lazic

HI.

On 18.02.21 10:12, spfma.t...@e.mail.fr wrote:

Hi,
I would like to setup a reverse proxy with SSL termination to allow something 
like :



https://front1.domain proxying http://back1.otherdomain:8000 (and maybe one day 
back2)
https://front2.domain proxying http://back3.otherdomain:5000

>

Common things I already configured using Apache's mod_proxy.
I am not an HAProxy expert, I only used it in tcp mode for simple and efficient 
load balancing.


I would suggest to take a look into the following articles.

https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/
https://www.haproxy.com/blog/introduction-to-haproxy-maps/

I have read this very interresting article https://www.haproxy.com/fr/blog/howto-write-apache-proxypass-rules-in-haproxy/ 
but it seems directives belong to former versions, and I was not able to get the expected result.

>

One of my important use-case is Apache backends hosting WordPress.
There are numerous examples here and there, but I always end with URLs like https://front1.domain/wp-admin 
redirected to http://front1.domain:8000/wp-admin or https://back1.otherdomain:8000/wp-admin aso ...

>
I know WP is redirecting to URLs related to it's configured URLs , so I guess some 
headers rewriting are required, but I don't know how to do that.
I am looking for a generic way to perform the required rewrites, without depending 
on fixed URL patterns. Is it only possible with HAProxy ? Some very old posts 
suggested it was not, but there were from around nine years ago.
I have not been able to find answers so far (some search results show appealing 
descriptions but sites are not responding) so I am looking for some help here.


Well you will need some pattern that the computer can follow.

For example based on which criteria should a program know what it should to on 
the URL?

Request: https://front1.domain/wp-admin

Redirect to http://front1.domain:8000/wp-admin when what happen?
Send request to https://back1.otherdomain:8000/wp-admin when what happen?

I would start with that config 
https://github.com/Tyrell66/SoHo/blob/master/haproxy-2020.05.02.cfg

Here a slightly adopted version.


```
frontend http-in
  bind *:80

# Prevent DDoS
stick-table type ip size 100k expire 30s store http_req_rate(10s)
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }

  http-request add-header X-Forwarded-Proto http
  redirect scheme https if !{ ssl_fc }


frontend https-in
# /etc/haproxy/certs/ contains both .pem for default and second domain 
names.
  bind *:443 ...

http-response replace-header Location ^http://(.*)$ https://\1
http-request add-header X-Forwarded-Proto https

http-request set-header X-Forwarded-Proto https
http-request set-header X-Forwarded-Port 443
capture request header X-Forwarded-For len 15

# Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

## Secure headers 
https://blog.devcloud.hosting/securing-haproxy-and-nginx-via-http-headers-54020d460283
## Test your config with https://securityheaders.com/
## and https://observatory.mozilla.org/

http-response set-header X-XSS-Protection 1;mode=block
http-response set-header X-Content-Type-Options nosniff
http-response set-header Referrer-Policy no-referrer-when-downgrade
http-response set-header X-Frame-Options SAMEORIGIN
http-response del-header X-Powered-By
http-response del-header Server


  # This line is for HSTS:
  http-response set-header Strict-Transport-Security "max-age=63072000; 
includeSubdomains; preload;"


  use_backend %[req.hdr(host),lower,map(hosts.map,be_static)]

backend be_static
  server default_static xxx.xxx.xx

backend be_domain1
http-request replace-uri ^/gc/(.*) /guacamole/\1
  server host1  192.168.1.13:58080/guacamole/#

...

```

file hosts.map
```
front1.domain be_domain1
front2.domain be_domain2

```

You can also set maps for path and host with ports.
As you can see HAProxy should be able to full fill your requirement as long as 
you can
define it for you and the program/Computer ;-)

Maybe this article could also help you to protect the WP installations for 
attacks.
https://www.haproxy.com/blog/wordpress-cms-brute-force-protection-with-haproxy/


Thanks


Welcome

Alex



[PATCH] DOC/MINOR: ROADMAP: adopt the Roadmap to the current state

2021-02-05 Thread Aleksandar Lazic

Hi.

attached a patch for the Roadmap.

There is also the bandwidth limitation as open entry due
to this fact I assume it's not easy to handle bandwith
limitation within haproxy.

Regards
Aleks
>From 8a77687ca480feb286fd394d533570b079d4be27 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 5 Feb 2021 11:09:35 +0100
Subject: [PATCH] DOC/MINOR: ROADMAP: adopt the Roadmap to the current state

---
 ROADMAP | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/ROADMAP b/ROADMAP
index a797b84eb..67cd7edaa 100644
--- a/ROADMAP
+++ b/ROADMAP
@@ -3,9 +3,10 @@ Medium-long term wish list - updated 2019/06/15
 Legend: '+' = done, '-' = todo, '*' = done except doc
 
 2.1 or later :
-  - return-html code xxx [ file "xxx" | text "xxx" ] if 
-
-  - return-raw  [ file "xxx" | text "xxx" ] if 
+  + return-html code xxx [ file "xxx" | text "xxx" ] if 
+  + return-raw  [ file "xxx" | text "xxx" ] if 
+Both are available since 2.2
+https://www.haproxy.com/blog/announcing-haproxy-2-2/#native-response-generator
 
   - have multi-criteria analysers which subscribe to req flags, rsp flags, and
 stream interface changes. This would result in a single analyser to wait
@@ -34,8 +35,10 @@ Legend: '+' = done, '-' = todo, '*' = done except doc
   - add support for event-triggered epoll, and maybe change all events handling
 to pass through an event cache to handle temporarily disabled events.
 
-  - evaluate the changes required for multi-process+shared mem or multi-thread
+  + evaluate the changes required for multi-process+shared mem or multi-thread
 +thread-local+fast locking.
+HAProxy uses Threads since 1.8. In 2.0 is the theading enhanced.
+https://www.haproxy.com/blog/haproxy-2-0-and-beyond/#cloud-native-threading-logging
 
 Old, maybe obsolete points :
   - clarify licence by adding a 'MODULE_LICENCE("GPL")' or something equivalent.
@@ -61,7 +64,7 @@ Unsorted :
 
   - random cookie generator
 
-  - fastcgi to servers
+  + fastcgi to servers. Available since 2.1 https://www.haproxy.com/blog/haproxy-2-1/#fastcgi
 
   - hot config reload
 
@@ -73,4 +76,4 @@ Unsorted :
 
   - dynamic weights based on check response headers and traffic response time
 
-  - various kernel-level acceleration (multi-accept, ssplice, epoll2...)
+  + various kernel-level acceleration (multi-accept, ssplice, epoll2...)
-- 
2.25.1



Re: HAProxy ratelimit based on bandwidth

2021-02-05 Thread Aleksandar Lazic

On 26.01.21 20:27, Aleksandar Lazic wrote:

Hi.

On 26.01.21 05:54, Sangameshwar Babu wrote:
 > Hello Team,
 >
 > I would like to get some suggestions on setting up ratelimit on HAProxy 1.8 
version,
 > my current setup is as below.
 >
 > 1000+ rsyslog clients(TCP) -> HAProxy (TCP mode) -> backend centralized 
rsyslog server.
 >
 > I have the below stick table and acl's through which I am able to mark a 
source as
 > "abuse" if the client crosses the limit post which all new connections from 
the
 > same client are rejected until stick table timer expires.
 >
 > haproxy.cfg
 > -
 >  stick-table type ip size 200k expire 2m store 
gpc0,conn_rate(2s),bytes_in_rate(1s),bytes_in_cnt
 >
 >  acl data_rate_abuse  sc1_bytes_in_rate ge 100
 >  acl data_size_abuse  sc1_kbytes_in ge 1
 >
 > tcp-request connection silent-drop if data_rate_abuse
 >  tcp-request connection reject if data_size_abuse
 >
 > However I would like to configure in such a way that once a client sends 
about
 > "x bytes" of data the connection should be closed instantly instead of 
marking it
 > abuse and simultaneous connections being rejected.

+1
I have a similar issue and hope that we get suggestions to get a answer here.

 > Kindly let me know if the above can be configured with HAProxy version 1.8.

I will need it for 2.2+


Looks like this feature is not yet available when I look into the roadmap.

There is a "bandwidth limits" entry.
http://git.haproxy.org/?p=haproxy.git;a=blob;f=ROADMAP;h=a797b84eb95298807cefa03edaa69583d8007c5b;hb=HEAD#l22

I have seen there also some points which are already implemented therefore I 
will send a patch to update the roadmap.


 > BR
 > Sangam


Regards
Aleks




Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-29 Thread Aleksandar Lazic

On 29.01.21 12:27, Christopher Faulet wrote:

Le 22/01/2021 à 07:08, Willy Tarreau a écrit :

On Thu, Jan 21, 2021 at 11:09:33PM +0100, Aleksandar Lazic wrote:

On 21.01.21 21:57, Christopher Faulet wrote:

Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?



Hi,

It is not possible right now. But it will be very very soon. Amaury implemented 
the
H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
tricky fixes on the tunnel management that must be carefully reviewed. It is a
nightmare to support all tunnel combinations. But I've almost done the review. I
must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
will do my best to push it very soon. Anyway, it will be a feature for the 2.4.


Wow that sounds really great. Thank you for your answer.


And by the way initially we thought we'd backport Amaury's work to 2.3,
but give the dependency with the tunnel stuff that opened this pandora
box, now I'm pretty sure we won't :-)

One nice point is that he managed to natively support the WS handshake,
it's not just a blind tunnel anymore, so that it's possible to have WS
using either H1 or H2 on the frontend, and either H1 or H2 on the backend.
Now we're really seeing the benefits of HTX because while at each extremity
we have a very specific WS handshake, in the middle we just have a tunnel
using a WS protocol, which allows a CONNECT on one side to become a GET on
the other side.

As Christopher said, the tunnel changes are extremely complicated because
these uncovered some old limitations at various levels, and each time we
reviewed the pending changes we could imagine a situation where an odd use
case would break if we don't recursively go into another round of refactoring
at yet another deeper level. But we're on the right track now, things start
to look good.



FYI, the HTTP/2 websockets support is now available and will be part of the 
next 2.4-dev release (2.4-dev7)


Cool thanks.



Re: HAProxy ratelimit based on bandwidth

2021-01-26 Thread Aleksandar Lazic

Hi.

On 26.01.21 05:54, Sangameshwar Babu wrote:
> Hello Team,
>
> I would like to get some suggestions on setting up ratelimit on HAProxy 1.8 
version,
> my current setup is as below.
>
> 1000+ rsyslog clients(TCP) -> HAProxy (TCP mode) -> backend centralized 
rsyslog server.
>
> I have the below stick table and acl's through which I am able to mark a 
source as
> "abuse" if the client crosses the limit post which all new connections from 
the
> same client are rejected until stick table timer expires.
>
> haproxy.cfg
> -
>  stick-table type ip size 200k expire 2m store 
gpc0,conn_rate(2s),bytes_in_rate(1s),bytes_in_cnt
>
>  acl data_rate_abuse  sc1_bytes_in_rate ge 100
>  acl data_size_abuse  sc1_kbytes_in ge 1
>
> tcp-request connection silent-drop if data_rate_abuse
>  tcp-request connection reject if data_size_abuse
>
> However I would like to configure in such a way that once a client sends about
> "x bytes" of data the connection should be closed instantly instead of 
marking it
> abuse and simultaneous connections being rejected.

+1
I have a similar issue and hope that we get suggestions to get a answer here.

> Kindly let me know if the above can be configured with HAProxy version 1.8.

I will need it for 2.2+

> BR
> Sangam

Regards
Aleks



Re: Question about substring match (*_sub)

2021-01-23 Thread Aleksandar Lazic

On 23.01.21 07:36, Илья Шипицин wrote:

the following usually works for performance profiling.


1) setup work stand (similar to what you use in production)

2) use valgrind + callgrind for collecting traces

3) put workload

4) aggregate using kcachegrind

most probably you were going to do very similar things already :)


Thanks for the tips ;-)

The issue here is that for sub-string matching are several parameters
important like pattern, pattern length, text, text length and the
alphabet.

My question was focused to hear some "common" setups to be able to
create some valid tests for the different algorithms to compare it.

I think something like the examples below. As I don't used _sub
in the past it's difficult to me alone to create some valid use
cases which are used out there. It's okay to send examples only
to me, just in case for some security or privacy reasons.

acl allow_from_int hdr(x-forwarded-for) hdr_sub("192.168.4.5")
acl admin_access   hdr(user)hdr_sub("admin")
acl test_url   path urlp_sub("test=1")

Should UTF-* be considered as valid Alphabet or only ASCII?

If _sub is a very rare case then it's okay as it is, isn't it?

Opinions?


сб, 23 янв. 2021 г. в 03:18, Aleksandar Lazic mailto:al-hapr...@none.at>>:

Hi.

I would like to take a look into the substring match implementation because 
of
the comment there.


http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767
 
<http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767>

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different algorithms about 
sub-string
match I would like to know which pattern and length is a "common" use case
for the user here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks






Question about substring match (*_sub)

2021-01-22 Thread Aleksandar Lazic

Hi.

I would like to take a look into the substring match implementation because of
the comment there.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different algorithms about sub-string
match I would like to know which pattern and length is a "common" use case
for the user here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

On 21.01.21 21:57, Christopher Faulet wrote:
> Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :
>> Hi.
>>
>> I'm not sure if I have missed something, because there are so many great 
features
>> now in HAProxy, therefore I just ask here.
>>
>> Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy 
now?
>>
>
> Hi,
>
> It is not possible right now. But it will be very very soon. Amaury 
implemented the
> H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
> tricky fixes on the tunnel management that must be carefully reviewed. It is a
> nightmare to support all tunnel combinations. But I've almost done the 
review. I
> must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
> will do my best to push it very soon. Anyway, it will be a feature for the 
2.4.

Wow that sounds really great. Thank you for your answer.

Regards
Aleks



Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?

Regards

Aleks



When to add HAProxy to QUIC Implementations Wiki

2021-01-13 Thread Aleksandar Lazic

Hi.

When I look into the quicwg site then I miss HAProxy there ;-)

https://github.com/quicwg/base-drafts/wiki/Implementations

When do you think is the best time to add HAProxy there?

Regards

Aleks



Re: Clean up "type: feature" in the tracker

2021-01-10 Thread Aleksandar Lazic

On 11.01.21 00:32, John Traweek CCNA, Sec+ wrote:

unsubscribe


You can unsubscribe you self from the list.
https://www.haproxy.org/#tact

Regards
Aleks


On 1/10/21, 10:03 AM, "Tim Düsterhus"  wrote:

 Hi List,
 Willy,
 Lukas,

 as of right now feature requests make up almost half of the open issues
 in the issue tracker (102 / 226). When looking through them I'm seeing a
 few that are probably not going to be implemented any time soon (if ever).

 An example would be "cache-aware server push" (#31):
 https://github.com/haproxy/haproxy/issues/31. Support for H2 pushes is
 being removed from web browsers, so it probably is not helpful taking
 the time to implement that.

 Another would be the generic UDP LB support (#62):
 https://github.com/haproxy/haproxy/issues/62.

 Maybe it makes sense to look through the list of currently open feature
 requests and close the ones that are very unlikely to be implemented (or
 unlikely to be implemented in the next few years) to clean up the
 tracker a bit.

 Closed requests are still there and can be found using the search, so
 the previous discussion will not be lost.

 The current list of feature requests can be found here:

 
https://github.com/haproxy/haproxy/issues?q=is%3Aissue+is%3Aopen+label%3A%22type%3A+feature%22

 Best regards
 Tim Düsterhus







Re: Content inspection using tcp-request/tcp-response content send-spoa-group

2020-11-24 Thread Aleksandar Lazic

Hi.

On 24.11.20 11:48, Stanislav Pavlíček wrote:

Hello,

I'm trying to implement content inspection using haproxy/SPOE and SPOA agent.

I created basic sample configuration to demonstrate my issue:

https://github.com/haproxy/haproxy/issues/956#issuecomment-732806414 

To reproduce locally, just download contentdebug.zip archive from link above, 
run it using docker-compose up and hit it with curl (e.g. curl -d '{}' http://localhost ).


The issue is that although I declared tcp-request/tcp-reponse content 
send-spoa-group rules, my SPOA agent is called only once with request length 0 and no payload.


I have downloaded the zip and see that you use the "contrib/spoa_server"
which have some issues which have Christopher Faulet explained in this post
https://www.mail-archive.com/haproxy@formilux.org/msg38484.html

I suspect I don't fully understand processing of tcp-request/tcp-response 
rules, acls and accept/reject criteria. I tried to add various acls mainly 
based on req.len/res.len, which I thought could be used to detect end of payload 
(The documentation says that req.len/res.len returns false when no more data is 
available), but still no luck.


My goal is to send every chunk of data read/written on given proxy to SPOA agent. 
Ideally I would like to avoid any buffering, which I thought I could achieve using 
https://www.arpalert.org/src/haproxy-lua-api/2.2/index.html#Channel.forward  (not used in my example).


Is it feasible? Or do I need to implement my own filter?


As far as I know there is no other scriptable spoa solution for now.
You can try to fix the issues for spoa_server or build your solution based on
contrib/spoa_example for example.

contrib/modsecurity looks like that is based on the spoa_example ;-)



This is really important for the project I am working on.

Thanks for any help.

Regards,
Stanislav Pavlicek


Regards
Aleks



Re: [2.2.5] High cpu usage after switch to threads

2020-11-19 Thread Aleksandar Lazic

Tim.

Cool big thank to clarify that for me.

Regards
Aleks

On 19.11.20 17:03, Tim Düsterhus wrote:

Aleks,

Am 19.11.20 um 16:53 schrieb Aleksandar Lazic:

When a H2 client send the header in lowercase then and h1 in mixed-case
could the "del-header"
line not match when it's only written in lowercase or mixed-case .

HTTP headers are defined to be case-insensitive. You quoted it yourself:


Just as in HTTP/1.x, header field names are strings of ASCII
characters that are compared in a case-insensitive fashion.

"as in HTTP/1.x"

Being able to request "case insensitive matching" with -i is redundant
and confusing for the administrator.

Personally I define all my headers within ACLs in a lowercase fashion
and that works for whatever casing the client wants to use today.


I think that was also one of the reason why the h1-case-adjust* feature
exists ;-)

The "feature" exists to support broken software that pretends to speak
HTTP when it in fact does not.

Best regards
Tim Düsterhus





Re: [2.2.5] High cpu usage after switch to threads

2020-11-19 Thread Aleksandar Lazic

Hi.

On 19.11.20 16:16, Maciej Zdeb wrote:

Hi,

Alaksandar I've looked into code and... :)


Great ;-)


śr., 18 lis 2020 o 15:30 Aleksandar Lazic mailto:al-hapr...@none.at>> napisał(a):

Can you think to respectthe '-i'.

http://git.haproxy.org/?p=haproxy.git=search=HEAD=grep=PAT_MF_IGNORE_CASE

I'm not sure if I understand you correctly, but in case of http-request del-header the "case 
insensitivity" must be always enabled, because header names should be case insensitive 
according to RFC. So we should not implement "-i" flag in this scenario.


Well in H2 are the headers in lowercase as far as I understand this part of H2 
RFC in a proper way.
But I'm not the expert in H2 so please correct me if I'm wrong.

Hypertext Transfer Protocol Version 2 (HTTP/2)
https://tools.ietf.org/html/rfc7540#section-8.1.2
```
8.1.2.  HTTP Header Fields

...
   Just as in HTTP/1.x, header field names are strings of ASCII
   characters that are compared in a case-insensitive fashion.  However,
   header field names MUST be converted to lowercase prior to their
   encoding in HTTP/2.
...

```

When a H2 client send the header in lowercase then and h1 in mixed-case could the 
"del-header"
line not match when it's only written in lowercase or mixed-case .

I think that was also one of the reason why the h1-case-adjust* feature exists 
;-)


Additional Info.

What I have see in the the checking of '-i' (PAT_MF_IGNORE_CASE), the '-m 
reg' functions
have not the  PAT_MF_IGNORE_CASE check.

I think you're looking at the regex execution, but flags are considered during regex compile. 
If you look at regex_comp function http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/regex.c;h=45a7e9004e8e4f6ad9604ed9a858aba0060b6204;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l312  you'll notice cs param. Function is called in pattern.c and couple other places. In my opinion -i with -m reg is perfectly valid and should work.


Cool great, I thought I missed something.
Thanks for the clarification.

Regards
Aleks



Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Aleksandar Lazic

Hi Maciej.

On 18.11.20 14:22, Maciej Zdeb wrote:
I've found an earlier discussion about replacing reqidel (and others) in 2.x: https://www.mail-archive.com/haproxy@formilux.org/msg36321.html 


So basically we're lacking:
http-request del-header x-private-  -m beg
http-request del-header x-.*company -m reg
http-request del-header -tracea     -m end

I'll try to implement it in the free time.


If I'm allowed to raise a wish, even I know and respect your time and your 
passion.

Can you think to respectthe '-i'.
http://git.haproxy.org/?p=haproxy.git=search=HEAD=grep=PAT_MF_IGNORE_CASE

Additional Info.

What I have see in the the checking of '-i' (PAT_MF_IGNORE_CASE), the '-m reg' 
functions
have not the  PAT_MF_IGNORE_CASE check.

Maybe I'm wrong but is the '-i' respected by '-m reg' pattern, because I don't 
see the
'icase' variable in this functions or any other check for PAT_MF_IGNORE_CASE 
flag.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l569
struct pattern *pat_match_regm

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l596
struct pattern *pat_match_reg

This both functions uses 'regex_exec_match2()' where I also don't see the 
PAT_MF_IGNORE_CASE check
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/regex.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l217

I have never used '-i' with regex so maybe it's a magic in the code which I 
don't recognize.

Regards
Aleks


śr., 18 lis 2020 o 13:20 Maciej Zdeb mailto:mac...@zdeb.pl>> 
napisał(a):

Sure, the biggest problem is to delete header by matching prefix:

load_blacklist = function(service)
     local prefix = '/etc/haproxy/configs/maps/header_blacklist'
     local blacklist = {}

     blacklist.req = {}
     blacklist.res = {}
     blacklist.req.str = Map.new(string.format('%s_%s_req.map', prefix, 
service), Map._str)
     blacklist.req.beg = Map.new(string.format('%s_%s_req_beg.map', prefix, 
service), Map._beg)

     return blacklist
end

blacklist = {}
blacklist.testsite = load_blacklist('testsite')

is_denied = function(bl, name)
     return bl ~= nil and (bl.str:lookup(name) ~= nil or 
bl.beg:lookup(name) ~= nil)
end

req_header_filter = function(txn, service)
         local req_headers = txn.http:req_get_headers()
         for name, _ in pairs(req_headers) do
                 if is_denied(blacklist[service].req, name) then
                         txn.http:req_del_header(name)
                 end
         end
end

core.register_action('req_header_filter', { 'http-req' }, 
req_header_filter, 1)

śr., 18 lis 2020 o 12:46 Julien Pivotto mailto:roidelapl...@inuits.eu>> napisał(a):

On 18 Nov 12:33, Maciej Zdeb wrote:
 > Hi again,
 >
 > So "# some headers manipulation, nothing different then on other 
clusters"
 > was the important factor in config. Under this comment I've hidden 
from you
 > one of our LUA scripts that is doing header manipulation like 
deleting all
 > headers from request when its name begins with "abc*". We're doing 
it on
 > all HAProxy servers, but only here it has such a big impact on the 
CPU,
 > because of huge RPS.
 >
 > If I understand correctly:
 > with nbproc = 20, lua interpreter worked on every process
 > with nbproc=1, nbthread=20, lua interpreter works on single 
process/thread
 >
 > I suspect that running lua on multiple threads is not a trivial 
task...

If you can share your lua script maybe we can see if this is doable
more natively in haproxy

 >
 >
 >
 >
 > wt., 17 lis 2020 o 15:50 Maciej Zdeb mailto:mac...@zdeb.pl>> napisał(a):
 >
 > > Hi,
 > >
 > > We're in a process of migration from HAProxy[2.2.5] working on 
multiple
 > > processes to multiple threads. Additional motivation came from the
 > > announcement that the "nbproc" directive was marked as deprecated 
and will
 > > be killed in 2.5.
 > >
 > > Mostly the migration went smoothly but on one of our clusters the 
CPU
 > > usage went so high that we were forced to rollback to nbproc. 
There is
 > > nothing unusual in the config, but the traffic on this particular 
cluster
 > > is quite unusual.
 > >
 > > With nbproc set to 20 CPU idle drops at most to 70%, with nbthread 
= 20
 > > after a couple of minutes at idle 50% it drops to 0%. HAProxy
 > > processes/threads are working on dedicated/isolated CPU cores.
 > >
 > > [image: image.png]
 > >
 > > I mentioned that traffic is quite unusual, because most of it are 
http
 > > requests with some payload in headers and very very small 
responses (like
 

Re: Integration of modsecurity v3 with haproxy

2020-11-13 Thread Aleksandar Lazic

On 10.11.20 17:52, Thomas SIMON wrote:
> Hi all,
>
> Is there a way to use some mecanism (spoe or other) to use modsecurity v3
> with haproxy (2.x) ?
> I found documentation on modsecurity v2 integration with spoe , but nothing
> on v3.
>
> My goal is to protect backends with modsecurity using owasp CRS.
>
> I've setup a nginx with modsecurity v3 on another server, and l'd like to 
proxy
> requests on this server for filtering before processing authorized traffic on 
backends.

Well v3 is a complete different beast, afaik.

Just a wild Idea.

You can try to use the same concept as Tim in the auth proxy lua as long as
no v3 implementation is available.
https://github.com/TimWolla/haproxy-auth-request

The "Auth Server" is the modsecurity server.

I have take a look into the v3 but it's not a easy task, so I don't know how and
when I can proceed with some sort of implementation.

> best regards
> thomas

Hth
Alex



Re: Count uniq Client ips

2020-10-15 Thread Aleksandar Lazic

Tim.

On 15.10.20 19:05, Tim Düsterhus wrote:

Aleks,

Am 15.10.20 um 14:08 schrieb Aleksandar Lazic:

The target is to know how much concurrent IP's request the a specific URL.


What *exactly* would you like to extract? Do you actually want
concurrent IP addresses? Log parsing then would be impossible by definition.


I need to know how many concurrent clients request *NOW* a specific URL and 
display
it in prometheus and limit access to max client let's say 50 per url.

That's my requirement.

Agree that the logfile is the wrong way to get this information's.



Best regards
Tim Düsterhus


Regards
Aleks



Re: Count uniq Client ips

2020-10-15 Thread Aleksandar Lazic

Hi Adis,

On 15.10.20 15:03, Adis Nezirovic wrote:

On 10/15/20 2:08 PM, Aleksandar Lazic wrote:

Hi.

I though maybe the peers could help me when I yust add the client IP 
with the URL but I'm not sure if I can query the peers store in a efficient way.


The target is to know how much concurrent IP's request the a specific URL.

Could lua be a solution.


Hey Aleks,

I'm not sure Lua would be the right solution for your situation, counting stuff 
is tricky.


Hm so you mean that lua could be a performance bottleneck for youtube scale ?
As I haven't used lua in haproxy or nginx I have no experience how it behaves 
on high
traffic sites.

I thought to use something like this but "proc" wide

function action(txn)
  -- Get source IP
  local clientip = txn.f:src()
  local url  = txn.sf:path_beg("/MY_URL")

  save_in_global_hash(clientip+url)
end

and query this save_in_global_hash with a service.

However, I think Redis has INCR, you you can store per URL counters and maybe (just maybe) 
use Lua action in HAProxy to write to Redis.


Obviously, you'd need to look out for performance, added latency etc, but it 
would be a start.
You can then access Redis outside of the HAProxy context and observe the 
counters.


Maybe the stick tables could also be a solution because I use it already for 
limiting access.

```
  # 
https://www.haproxy.com/blog/application-layer-ddos-attack-protection-with-haproxy/
  http-request track-sc0 src table per_ip_rates
```
# table: per_ip_rates, type: ip, size:1048576, used:3918

0x7f3c58fa9620: key= use=0 exp=597470 http_req_rate(1)=1

0x7f3c4d299960: key= use=0 exp=588433 http_req_rate(1)=2
0x7f3c50cc8830: key= use=0 exp=241004 http_req_rate(1)=0
0x7f3c5c6b3eb0: key= use=0 exp=586046 http_req_rate(1)=1
...
```

Can i add there a URL part like path_beg("/MYURL")


Just my 2c, hope it helps you (like you helped many people on this list)


Thank you for your input.


Best regards,





Count uniq Client ips

2020-10-15 Thread Aleksandar Lazic
Hi.

I have a quite tricky requirement and hope to get some input for a efficient 
solution.

I use a haproyx in front of a streaming server.

The access log, in json format, writes out the http request to syslog which is 
this plugin 
https://github.com/influxdata/telegraf/tree/release-1.14/plugins/inputs/syslog

Now I tried with 
https://github.com/influxdata/telegraf/tree/release-1.14/plugins/processors/dedup
 to get unique IP's but that's quite unprecise.

I though maybe the peers could help me when I yust add the client IP with the 
URL but I'm not sure if I can query the peers store in a efficient way.

The target is to know how much concurrent IP's request the a specific URL.

Could lua be a solution.

Thanks for any ideas.

Best regards
Aleks



Re: [PR] SOCKS4(A)

2020-10-03 Thread Aleksandar Lazic

Hi.

On 02.10.20 13:54, Christopher Faulet wrote:

Le 02/10/2020 à 08:58, Willy Tarreau a écrit :


So if anyone currently uses socks4 to talk to servers, I suggest you
run a quick test on 2.2 or 2.3 to see if health checks continue to work
over socks4 or not, in which case it's likely you'll be able to provide
an easier reproducer that will allow to fix the problem. This will save
everyone time and protect our eyeballs by keeping them away from this
blinking patch.


There is indeed a bug. The flag CO_FL_SOCKS4 is set after the connect() 
for tcp-checks, making the health-checks though a socks4 proxy fail. 
Here is a patch to fix this bug. I will push it very soon.


Remains the support of the SOCKS4A in Alex patches. But I will let anyone 
motivated by this part working on it :)


I was curious how much the patch really change and have took the time to
adopt the patch.

What I have seen is that he current code mixes tabs and white spaces also
the goto intentions. I think this is because of the evolution of the code ;-)

Attached the patch which I have cleaned from the formatings.
The code builds but I have not tested it if it works.

The main patch is 001 the other two are just to remove double definition
of functions.

Regards
Aleks

>From 249f3e2467f3957e4af786829d5bb585de7f2df9 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sun, 4 Oct 2020 03:09:13 +0200
Subject: [PATCH 3/3] Sock4(A) original patch from alex v3

---
 include/haproxy/connection.h | 34 --
 1 file changed, 34 deletions(-)

diff --git a/include/haproxy/connection.h b/include/haproxy/connection.h
index e06de3c67..2189ab8c1 100644
--- a/include/haproxy/connection.h
+++ b/include/haproxy/connection.h
@@ -150,40 +150,6 @@ static inline void conn_prepare_new_for_socks4(struct connection *conn, struct s
 	}
 }
 
-static inline void conn_set_domain(struct connection *conn, const char *domain)
-{
-	conn_free_domain(conn);
-	if (domain) {
-		size_t len = strlen(domain) + 1;
-		conn->requested_domain = malloc(len);
-		if (!conn->requested_domain) {
-			/* TODO: Handle malloc error */
-		}
-		
-		memcpy(conn->requested_domain, domain, len);
-	}
-}
-
-static int is_server_fake_address(struct server *srv)
-{
-	return (srv->flags & SRV_F_SOCKS4_PROXY_FAILED_RESOLVE);
-}
-
-static inline void conn_set_domain_from_server(struct connection *conn, struct server *srv)
-{
-	if (is_server_fake_address(srv))
-		conn_set_domain(conn, srv->hostname);
-}
-
-static inline void conn_prepare_new_for_socks4(struct connection *conn, struct server *srv)
-{
-	if (srv && (srv->flags & SRV_F_SOCKS4_PROXY)) {
-		conn->send_proxy_ofs = 1;
-		conn->flags |= CO_FL_SOCKS4;
-		conn_set_domain_from_server(conn, srv);
-	}
-}
-
 /* Calls the close() function of the transport layer if any and if not done
  * yet, and clears the CO_FL_XPRT_READY flag. However this is not done if the
  * CO_FL_XPRT_TRACKED flag is set, which allows logs to take data from the
-- 
2.25.1

>From 9f9f4c33d15b3c15b37d54152b141bfc0e345ed2 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sun, 4 Oct 2020 03:00:29 +0200
Subject: [PATCH 2/3] Sock4(A) original patch from alex v2

---
 include/haproxy/connection.h | 9 -
 1 file changed, 9 deletions(-)

diff --git a/include/haproxy/connection.h b/include/haproxy/connection.h
index c1eec84de..e06de3c67 100644
--- a/include/haproxy/connection.h
+++ b/include/haproxy/connection.h
@@ -150,15 +150,6 @@ static inline void conn_prepare_new_for_socks4(struct connection *conn, struct s
 	}
 }
 
-static inline void conn_free_domain(struct connection *conn)
-{
-	if (conn->requested_domain)
-	{
-		free(conn->requested_domain);
-		conn->requested_domain = NULL;
-	}
-}
-
 static inline void conn_set_domain(struct connection *conn, const char *domain)
 {
 	conn_free_domain(conn);
-- 
2.25.1

>From 294196a787f7345d42553d531d39906ef7a7468b Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sun, 4 Oct 2020 02:34:15 +0200
Subject: [PATCH 1/3] Sock4(A) original patch from alex

---
 include/haproxy/connection-t.h |  3 +-
 include/haproxy/connection.h   | 90 ++
 include/haproxy/fake_host.h| 24 +
 include/haproxy/server-t.h |  1 +
 src/backend.c  | 17 +++
 src/connection.c   | 80 +-
 src/server.c   | 30 
 src/tcpcheck.c |  3 ++
 8 files changed, 215 insertions(+), 33 deletions(-)
 create mode 100644 include/haproxy/fake_host.h

diff --git a/include/haproxy/connection-t.h b/include/haproxy/connection-t.h
index 9caa2ca49..669f97c12 100644
--- a/include/haproxy/connection-t.h
+++ b/include/haproxy/connection-t.h
@@ -490,7 +490,8 @@ struct connection {
 	struct sockaddr_storage *dst; /* destination address (pool), when known, otherwise NULL */
 	char *proxy_authority;	  /* Value of authority TLV received via PROXYv2 */
 	uint8_t proxy_authority_len;  /* Length of authority TLV received via PROXYv2 */
-	struct 

Re: Dynamic Googlebot identification via lua?

2020-09-08 Thread Aleksandar Lazic

On 08.09.20 22:54, Tim Düsterhus wrote:

Reinhard,
Björn,

Am 08.09.20 um 21:39 schrieb Björn Jacke:

the only official supported way to identify a google bot is to run a
reverse DNS lookup on the accessing IP address and run a forward DNS
lookup on the result to verify that it points to accessing IP address
and the resulting domain name is in either googlebot.com or google.com
domain.
...


thanks for asking this again, I brought this up earlier this year and I
got no answer:

https://www.mail-archive.com/haproxy@formilux.org/msg37301.html

I would expect that this is something that most sites would actually
want to check and I'm surprised that there is no solution for this or at
least none that is obvious to find.


The usually recommended solution for this kind of checks is either Lua
or the SPOA, running the actual logic out of process.

For Lua my haproxy-auth-request script is a batteries included solution
to query an arbitrary HTTP service:
https://github.com/TimWolla/haproxy-auth-request. It comes with the
drawback that Lua runs single-threaded within HAProxy, so you might not
want to use this if the checks need to run in the hot path, handling
thousands of requests per second.

It should be possible to cache the results of the script using a stick
table or a map.

Back in nginx times I used nginx' auth_request to query a local service
that checked whether the client IP address was a Tor exit node. It
worked well.

For SPOA there's this random IP reputation service within the HAProxy
repository:
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_example. I
never used the SPOA feature, so I can't comment on whether that example
generally works and how hard it would be to extend it. It certainly
comes with the restriction that you are limited to C or Python (or a
manual implementation of the SPOA protocol) vs anything that speaks HTTP.


In addition to Tim's answer you can also try to use spoa_server which
supports `-n `.
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_server


Best regards
Tim Düsterhus


Regards
Aleks



Re: stable-bot: Bugfixes waiting for a release 2.2 (18), 2.1 (13), 2.0 (8), 1.8 (6)

2020-08-19 Thread Aleksandar Lazic

On 19.08.20 11:42, Willy Tarreau wrote:

Hi Aleks,

On Wed, Aug 19, 2020 at 11:32:13AM +0200, Aleksandar Lazic wrote:

Please can the following patch also be considered to be backported.

OPTIM: startup: fast unique_id allocation for acl.
http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f91ac19299fe216a793ba6550dca06b688b31549


Backported to which ones ? It's already in 2.2, 2.1 and 2.0, as well
as the fix that Tim brought later since this patch alone caused some
accidental breakage.

If you're thinking about 1.8, I'm really not fond of backporting
optimizations that far at the risk of breaking well-working setups.
One of the reasons we now have a faster release cycle is to make sure
users have plenty of choice of versions with a very limited risk of
breakage, and that in order to maintain this we also avoid backporting
what is not essential. And if you want my opinion, I think we still
backport too much too far... (or at least too fast).


Yep. I thought to 1.8 and I understand the reason why it was not backported.
Thanks for answer.


Regards,
Willy






Re: stable-bot: Bugfixes waiting for a release 2.2 (18), 2.1 (13), 2.0 (8), 1.8 (6)

2020-08-19 Thread Aleksandar Lazic

On 19.08.20 02:00, stable-...@haproxy.com wrote:

Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.


Last release 2.2.2 was issued on 2020-07-31.  There are currently 18 patches in 
the queue cut down this way:
 - 1 MAJOR, first one merged on 2020-08-05
 - 6 MEDIUM, first one merged on 2020-08-05
 - 11 MINOR, first one merged on 2020-08-05

Thus the computed ideal release date for 2.2.3 would be 2020-08-19, which was 
within the last week.

Last release 2.1.8 was issued on 2020-07-31.  There are currently 13 patches in 
the queue cut down this way:
 - 4 MEDIUM, first one merged on 2020-08-05
 - 9 MINOR, first one merged on 2020-08-11

Thus the computed ideal release date for 2.1.9 would be 2020-09-04, which is in 
three weeks or less.

Last release 2.0.17 was issued on 2020-07-31.  There are currently 8 patches in 
the queue cut down this way:
 - 4 MEDIUM, first one merged on 2020-08-05
 - 4 MINOR, first one merged on 2020-08-11

Thus the computed ideal release date for 2.0.18 would be 2020-10-04, which is 
in seven weeks or less.

Last release 1.8.26 was issued on 2020-08-03.  There are currently 6 patches in 
the queue cut down this way:
 - 2 MEDIUM, first one merged on 2020-08-05
 - 4 MINOR, first one merged on 2020-08-03


Please can the following patch also be considered to be backported.

OPTIM: startup: fast unique_id allocation for acl.
http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f91ac19299fe216a793ba6550dca06b688b31549



Thus the computed ideal release date for 1.8.27 would be 2020-10-26, which is 
in ten weeks or less.

The current list of patches in the queue is:
  - 2.2   - MAJOR   : dns: disabled servers through SRV 
records never recover
  - 2.2   - MEDIUM  : ssl: fix the ssl-skip-self-issued-ca 
option
  - 1.8, 2.0  - MEDIUM  : mux-h2: Don't fail if nothing is 
parsed for a legacy chunk response
  - 2.2   - MEDIUM  : ssl: never generates the chain from 
the verify store
  - 2.0, 2.1, 2.2 - MEDIUM  : mux-h1: Refresh H1 connection timeout 
after a synchronous send
  - 2.0, 2.1, 2.2 - MEDIUM  : htx: smp_prefetch_htx() must always 
validate the direction
  - 2.1, 2.2  - MEDIUM  : ssl: memory leak of ocsp data at 
SSL_CTX_free()
  - 1.8, 2.0, 2.1, 2.2- MEDIUM  : map/lua: Return an error if a map 
is loaded during runtime
  - 2.1, 2.2  - MINOR   : arg: Fix leaks during arguments 
validation for fetches/converters
  - 1.8, 2.0, 2.1, 2.2- MINOR   : lua: Check argument type to 
convert it to IP mask in arg validation
  - 2.1, 2.2  - MINOR   : ssl: fix memory leak at OCSP loading
  - 2.0, 2.1, 2.2 - MINOR   : snapshots: leak of snapshots on 
deinit()
  - 1.8, 2.0, 2.1, 2.2- MINOR   : stats: use strncmp() instead of 
memcmp() on health states
  - 2.2   - MINOR   : ssl: ssl-skip-self-issued-ca requires 
>= 1.0.2
  - 2.1, 2.2  - MINOR   : lua: Duplicate map name to load it 
when a new Map object is created
  - 2.2   - MINOR   : spoa-server: fix size_t format 
printing
  - 1.8, 2.0, 2.1, 2.2- MINOR   : lua: Check argument type to 
convert it to IPv4/IPv6 arg validation
  - 1.8   - MINOR   : dns: ignore trailing dot
  - 2.1, 2.2  - MINOR   : converters: Store the sink in an arg 
pointer for debug() converter
  - 2.1, 2.2  - MINOR   : lua: Duplicate lua strings in sample 
fetches/converters arg array






haproxy <> dataplane

2020-08-15 Thread Aleksandar Lazic

Hi.

Afaik are there several ways to run haproxy and dataplane together.

Because I just start work with the dataplane, is there a "best practice" or 
"recommended" way to use these both component together?
Maybe someone can share some expirinede with the combination of haproxy and 
dataplane.

Best regards

Aleks



QUIC-LB: Generating Routable QUIC Connection IDs

2020-07-26 Thread Aleksandar Lazic

Hi.

Have you seen this Draft?

https://datatracker.ietf.org/doc/draft-ietf-quic-load-balancers/

Because there are a lot of QUIC Drafts there and 2.2 is released it would be 
nice
to get some update about the QUIC state in HAProxy ;-).

https://datatracker.ietf.org/doc/search/?name=QUIC=on=on

Best Regards

Aleks



[PATCH] DOC/MINOR: haproxy: Add description which delimiter is used for h1-case-adjust-file

2020-07-15 Thread Aleksandar Lazic

Hi.

This patch is a proposal to add the to the doc the delimiter for 
h1-case-adjust-file.

Regards

Aleks

>From d1b1061a54bb254c722cdfc984cde3466eabf5a1 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Wed, 15 Jul 2020 21:31:18 +0200
Subject: [PATCH] DOC/MINOR: haproxy: Add description which delimiter is used for
 h1-case-adjust-file

---
 doc/configuration.txt | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 2a4672b05..6c2e9f4b6 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -907,14 +907,15 @@ h1-case-adjust  
   "option h1-case-adjust-bogus-server".
 
 h1-case-adjust-file 
-  Defines a file containing a list of key/value pairs used to adjust the case
-  of some header names before sending them to HTTP/1 clients or servers. The
-  file  must contain 2 header names per line. The first one must be
-  in lower case and both must not differ except for their case. Lines which
-  start with '#' are ignored, just like empty lines. Leading and trailing tabs
-  and spaces are stripped. Duplicate entries are not allowed. Please note that
-  no transformation will be applied unless "option h1-case-adjust-bogus-client"
-  or "option h1-case-adjust-bogus-server" is specified in a proxy.
+  Defines a file containing a list of key/value pairs separated by spaces 
+  used to adjust the case of some header names before sending them to HTTP/1 
+  clients or servers. The file  must contain 2 header names per 
+  line separated by spaces. The first one must be in lower case and both must 
+  not differ except for their case. Lines which start with '#' are ignored, 
+  just like empty lines. Leading and trailing tabs and spaces are stripped. 
+  Duplicate entries are not allowed. Please note that no transformation will be
+  applied unless "option h1-case-adjust-bogus-client" or 
+  "option h1-case-adjust-bogus-server" is specified in a proxy.
 
   If this directive is repeated, only the last one will be processed.  It is an
   alternative to the directive "h1-case-adjust" if a lot of header names need
-- 
2.20.1



Re: Documentation

2020-07-11 Thread Aleksandar Lazic

On 11.07.20 13:11, Tofflan wrote:

Hello!

Im trying to setup a setup HAProxy on my Pfsense router, the links under 
documentation dont work. example: 
https://cbonte.github.io/haproxy-dconv/2.3/intro.html and 
https://cbonte.github.io/haproxy-dconv/2.3/configuration.html
Is there anyway to read or download them somewhere?


The html version is created from the doc/configuration.txt file,
this file should be on all installed haproxy servers because it
should be part of the haproxy package.


Sincerely Fille


Regards
Aleks



Re: [PATCH 1/2] MEDIUM: ssl: Support certificate chaining for certificate generation

2020-07-06 Thread Aleksandar Lazic

Should a blank be after '%s'?

+   memprintf(err, "%sthis version of openssl cannot attach certificate chain 
for SSL certificate generation.\n",
+ err && *err ? *err : "");

On 05.07.20 14:09, Gersner wrote:

That's my fault. I was aware of the versioning but forgot to wrap in ifdef 
there.
Configuration prevents from setting those settings on unsupported versions.


On Sun, Jul 5, 2020 at 2:57 PM Илья Шипицин mailto:chipits...@gmail.com>> wrote:

https://cirrus-ci.com/task/6191727960653824

seems, openssl-1.0.0 (used in CentOS6/RHEL6) does not support those methods.

haproxy claims to support openssl starting 0.9.8, I guess openssl-0.9.8 is 
rarely tested

вс, 5 июл. 2020 г. в 16:48, Gersner mailto:gers...@gmail.com>>:

Awesome. I will run the manual tests on the variants later today.
Thanks.

On Sun, Jul 5, 2020 at 2:45 PM Илья Шипицин mailto:chipits...@gmail.com>> wrote:

if you have tested your code (I'm sure you did), maybe manual 
testing will be simple enough
you just need to rebuild haproxy against LibreSSL, BoringSSL, older 
openssl

examples how to build ssl lib and build haproxy against it might be 
taken from .travis.yml (I was about to write an article, but I'm lazy)

вс, 5 июл. 2020 г. в 16:16, Gersner mailto:gers...@gmail.com>>:

Oh, wasn't aware of that.
Is there some automation to test this or should I manually 
verify this?


On Sun, Jul 5, 2020 at 2:13 PM Илья Шипицин mailto:chipits...@gmail.com>> wrote:

I recall some issues with LibreSSL and chaining trust. Like 
it was declared but never worked.
we'll see that in runtime if there are such issues

вс, 5 июл. 2020 г. в 16:06, Илья Шипицин mailto:chipits...@gmail.com>>:

nice, all ssl variants build well

https://travis-ci.com/github/chipitsine/haproxy/builds/174323866

вс, 5 июл. 2020 г. в 15:48, Gersner mailto:gers...@gmail.com>>:



On Sun, Jul 5, 2020 at 1:42 PM Илья Шипицин 
mailto:chipits...@gmail.com>> wrote:

do you have your patches on github fork ?
(I could not find your fork)

Yes. See branch 
https://github.com/Azure/haproxy/tree/wip/sgersner/ca-sign-extra


вс, 5 июл. 2020 г. в 15:13, Gersner mailto:gers...@gmail.com>>:



On Sun, Jul 5, 2020 at 12:28 PM Илья Шипицин 
mailto:chipits...@gmail.com>> wrote:

does it clearly applies to current 
master ? either gmail scrambled patch or it is not.
can you try please ?

Exporting the eml and running 'git am' it 
works cleanly.

I've reproduced the exact same output when 
copy-pasting from gmail. It seems gmail converts the tabs to spaces and this 
fails the patch (Not sure why).
Running patch with '-l' will resolve this, 
but it's probably safer to run git am on the email.


$ patch -p1 < 1.patch
patching file doc/configuration.txt
patching file 
include/haproxy/listener-t.h
Hunk #1 FAILED at 163.
1 out of 1 hunk FAILED -- saving 
rejects to file include/haproxy/listener-t.h.rej
patching file src/cfgparse-ssl.c
Hunk #1 succeeded at 538 with fuzz 1.
Hunk #2 FAILED at 1720.
1 out of 2 hunks FAILED -- saving 
rejects to file src/cfgparse-ssl.c.rej
patching file src/ssl_sock.c
Hunk #1 FAILED at 1750.
Hunk #2 FAILED at 1864.
Hunk #3 FAILED at 1912.
Hunk #4 FAILED at 1943.
Hunk #5 FAILED at 1970.
Hunk #6 FAILED at 4823.
Hunk #7 FAILED at 4843.
7 out of 7 hunks FAILED -- saving 
rejects to file src/ssl_sock.c.rej

вс, 5 июл. 2020 г. в 11:46, mailto:gers...@gmail.com>>:

From: Shimi Gersner mailto:sgers...@microsoft.com>>

haproxy supports generating SSL 
certificates based on SNI using a provided
   

Re: Rate Limit per IP with queueing (delay)

2020-06-08 Thread Aleksandar Lazic
Sorry send to early. Now the full answer.

On 08.06.20 14:39, Aleksandar Lazic wrote:
> On 08.06.20 14:28, Stefano Tranquillini wrote:
>> Hi thanks for the reply
>>
>> why the set-priority is a better choice? 
>> will it just limit the connection in case there's need while it 
>> will not limit the connection per se?
>> i mean, if the system is capable of supporting 600 calls, with 
>> the set priority it will still process the 600 calls rather than 
>> limit the user to a max of 100 per minute

Well, as far as I know have hapoxy not the feature to "delay" a
connection except to move it in the request queue.
My idea is to move the requests to different queue which will be
handled after the other requests are handled.

I don't know if this will work, it's just a idea.

Regards
Aleks

> 
>> On Mon, Jun 8, 2020 at 1:27 PM Aleksandar Lazic > <mailto:al-hapr...@none.at>> wrote:
>>
>> On 08.06.20 09:15, Stefano Tranquillini wrote:
>> >
>> >
>> > On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин > <mailto:chipits...@gmail.com> <mailto:chipits...@gmail.com 
>> <mailto:chipits...@gmail.com>>> wrote:
>> >
>> >
>> >
>> >     вс, 7 июн. 2020 г. в 19:59, Stefano Tranquillini > <mailto:stef...@chino.io> <mailto:stef...@chino.io 
>> <mailto:stef...@chino.io>>>:
>> >
>> >         Hello all,
>> >
>> >         I'm moving to HA using it to replace NGINX and I've a question 
>> regarding how to do a Rate Limiting in HA that enables queuing the requests 
>> instead of closing them.
>> >
>> >         I was able to limit per IP following those examples: 
>> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ . 
>> However, when the limit is reached, the users see the error and connection 
>> is closed.
>> >
>> >         Since I come from NGINX, it has this handy feature 
>> https://www.nginx.com/blog/rate-limiting-nginx/ where connections that 
>> exceed the threshold are queued. Thus the user will still be able to do the 
>> calls but be delayed without him getting errors and keep the overall number 
>> of requests within threshold.
>> >
>> >         Is there anything similar in HA? It should limit/queueing the 
>> user by IP.
>> >
>> >         To explain with an example, we have two users |Alice|, with ip 
>> |A.A.A.A| and |Bob| with ip |B.B.B.B| The threshold is |30r/minute|.
>> >
>> >         So in 1 minute:
>> >
>> >           * Alice does 20 requests. -> that's fine
>> >           * Bob does 60 requests. -> the system caps the requset to 30 
>> and then process the other 30 later on (maybe also adding timeout/delay)
>> >           * Alice does 50 request -> the first 40 are fine, the next 
>> 10 are queued.
>> >           * Bob does 20 requests -> they are queue after the one above.
>> >
>> >         I saw that it can be done in general, by limiting the 
>> connections per host. But this will mean that it's cross IP and thus, if 500 
>> is the limit
>> >         - Alice  does 1 call
>> >         - Bob does 1000 calls
>> >         - Alice does another 1 call
>> >         - Alice will be queued, that's not what i would like to have.
>> >
>> >         is this possible? Is there anything similar that can be done?
>> >
>> >
>> >     it is not cross IP.  I wish nginx docs would be better on that.
>> >
>> > What do you mean?
>> > in nginx i do
>> > limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
>> > and works
>> >
>> >     first, in nginx terms it is limited by zone key. you can define 
>> key using for example $binary_remote_addr$http_user_agent$ssl_client_ciphers
>> >     that means each unique combination of those parameters will be 
>> limited by its own counter (or you can use nginx maps to construct such a 
>> zone key)
>> >
>> >     in haproxy you can see and example of
>> >
>> >     # Track client by base32+src (Host header + URL path + src IP)
>> >
>> >     http-requesttrack-sc0 base32+src
>> >
>> >     which also means key definition may be as flexible as you can 
>> imagine.
>> >
>> >
>> &

Re: Rate Limit per IP with queueing (delay)

2020-06-08 Thread Aleksandar Lazic
On 08.06.20 14:28, Stefano Tranquillini wrote:
> Hi thanks for the reply
>
> why the set-priority is a better choice? 
> will it just limit the connection in case there's need while it 
> will not limit the connection per se?
> i mean, if the system is capable of supporting 600 calls, with 
> the set priority it will still process the 600 calls rather than 
> limit the user to a max of 100 per minute

Well as far as I know have hapox not the feauture to "delay" a
connecstion except to move it in the request questue

> On Mon, Jun 8, 2020 at 1:27 PM Aleksandar Lazic  <mailto:al-hapr...@none.at>> wrote:
> 
> On 08.06.20 09:15, Stefano Tranquillini wrote:
> >
> >
> > On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин  <mailto:chipits...@gmail.com> <mailto:chipits...@gmail.com 
> <mailto:chipits...@gmail.com>>> wrote:
> >
> >
> >
> >     вс, 7 июн. 2020 г. в 19:59, Stefano Tranquillini  <mailto:stef...@chino.io> <mailto:stef...@chino.io 
> <mailto:stef...@chino.io>>>:
> >
> >         Hello all,
> >
> >         I'm moving to HA using it to replace NGINX and I've a question 
> regarding how to do a Rate Limiting in HA that enables queuing the requests 
> instead of closing them.
> >
> >         I was able to limit per IP following those examples: 
> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ . 
> However, when the limit is reached, the users see the error and connection is 
> closed.
> >
> >         Since I come from NGINX, it has this handy feature 
> https://www.nginx.com/blog/rate-limiting-nginx/ where connections that exceed 
> the threshold are queued. Thus the user will still be able to do the calls 
> but be delayed without him getting errors and keep the overall number of 
> requests within threshold.
> >
> >         Is there anything similar in HA? It should limit/queueing the 
> user by IP.
> >
> >         To explain with an example, we have two users |Alice|, with ip 
> |A.A.A.A| and |Bob| with ip |B.B.B.B| The threshold is |30r/minute|.
> >
> >         So in 1 minute:
> >
> >           * Alice does 20 requests. -> that's fine
> >           * Bob does 60 requests. -> the system caps the requset to 30 
> and then process the other 30 later on (maybe also adding timeout/delay)
> >           * Alice does 50 request -> the first 40 are fine, the next 10 
> are queued.
> >           * Bob does 20 requests -> they are queue after the one above.
> >
> >         I saw that it can be done in general, by limiting the 
> connections per host. But this will mean that it's cross IP and thus, if 500 
> is the limit
> >         - Alice  does 1 call
> >         - Bob does 1000 calls
> >         - Alice does another 1 call
> >         - Alice will be queued, that's not what i would like to have.
> >
> >         is this possible? Is there anything similar that can be done?
> >
> >
> >     it is not cross IP.  I wish nginx docs would be better on that.
> >
> > What do you mean?
> > in nginx i do
> > limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
> > and works
> >
> >     first, in nginx terms it is limited by zone key. you can define key 
> using for example $binary_remote_addr$http_user_agent$ssl_client_ciphers
> >     that means each unique combination of those parameters will be 
> limited by its own counter (or you can use nginx maps to construct such a 
> zone key)
> >
> >     in haproxy you can see and example of
> >
> >     # Track client by base32+src (Host header + URL path + src IP)
> >
> >     http-requesttrack-sc0 base32+src
> >
> >     which also means key definition may be as flexible as you can 
> imagine.
> >
> >
> > the point is, how can i cap the number of requests for a single user to 
> 40r/minute for example? or any number.
> >
> > What I was able to do is to slow it down in this way, but it does not 
> ensure the cap per request, it only adds 500ms to each call.
> >
> > frontend proxy
> >     bind *:80
> >     # ACL function declarations
> >     acl is_first_level src_http_req_rate(Abuse) ge 30 
> >     use_backend api_delay if is_first_level
> >     use_backend api
> >
> > backend api
> >     server api01 api01:8

Re: Rate Limit per IP with queueing (delay)

2020-06-08 Thread Aleksandar Lazic
On 08.06.20 09:15, Stefano Tranquillini wrote:
> 
> 
> On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин  > wrote:
> 
> 
> 
> вс, 7 июн. 2020 г. в 19:59, Stefano Tranquillini  >:
> 
> Hello all,
> 
> I'm moving to HA using it to replace NGINX and I've a question 
> regarding how to do a Rate Limiting in HA that enables queuing the requests 
> instead of closing them.
> 
> I was able to limit per IP following those examples: 
> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ . 
> However, when the limit is reached, the users see the error and connection is 
> closed.
> 
> Since I come from NGINX, it has this handy feature 
> https://www.nginx.com/blog/rate-limiting-nginx/ where connections that exceed 
> the threshold are queued. Thus the user will still be able to do the calls 
> but be delayed without him getting errors and keep the overall number of 
> requests within threshold.
> 
> Is there anything similar in HA? It should limit/queueing the user by 
> IP.
> 
> To explain with an example, we have two users |Alice|, with ip 
> |A.A.A.A| and |Bob| with ip |B.B.B.B| The threshold is |30r/minute|.
> 
> So in 1 minute:
> 
>   * Alice does 20 requests. -> that's fine
>   * Bob does 60 requests. -> the system caps the requset to 30 and 
> then process the other 30 later on (maybe also adding timeout/delay)
>   * Alice does 50 request -> the first 40 are fine, the next 10 are 
> queued.
>   * Bob does 20 requests -> they are queue after the one above.
> 
> I saw that it can be done in general, by limiting the connections per 
> host. But this will mean that it's cross IP and thus, if 500 is the limit
> - Alice  does 1 call
> - Bob does 1000 calls
> - Alice does another 1 call
> - Alice will be queued, that's not what i would like to have.
> 
> is this possible? Is there anything similar that can be done?
> 
> 
> it is not cross IP.  I wish nginx docs would be better on that.
> 
> What do you mean?
> in nginx i do
> limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
> and works
> 
> first, in nginx terms it is limited by zone key. you can define key using 
> for example $binary_remote_addr$http_user_agent$ssl_client_ciphers
> that means each unique combination of those parameters will be limited by 
> its own counter (or you can use nginx maps to construct such a zone key)
> 
> in haproxy you can see and example of
> 
> # Track client by base32+src (Host header + URL path + src IP)
> 
> http-requesttrack-sc0 base32+src
> 
> which also means key definition may be as flexible as you can imagine.
> 
> 
> the point is, how can i cap the number of requests for a single user to 
> 40r/minute for example? or any number.
> 
> What I was able to do is to slow it down in this way, but it does not ensure 
> the cap per request, it only adds 500ms to each call.
> 
> frontend proxy
>     bind *:80
>     # ACL function declarations
>     acl is_first_level src_http_req_rate(Abuse) ge 30 
>     use_backend api_delay if is_first_level
>     use_backend api
> 
> backend api
>     server api01 api01:80  
>     server api02 api02:80
>     server api03 api03:80
> 
> backend api_delay
>     tcp-request inspect-delay 500ms
>     tcp-request content accept if WAIT_END
>     server api01 api01:80  
>     server api02 api02:80
>     server api03 api03:80
> 
> backend Abuse
>     stick-table type ip size 100k expire 15s store http_req_rate(10s)

I would try to use "http-request set-priority-class" and/or
"http-request set-priority-offset" for this.
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-request%20set-priority-class
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-request%20set-priority-offset

```
acl is_first_level src_http_req_rate(Abuse) ge 30
http-request set-priority int(20) if is_first_level

```

In the mailing list archive is a example how to use it
https://www.mail-archive.com/haproxy@formilux.org/msg29915.html

Sorry that I can't give you a better solution but I never used it
so it would be nice to get feedback if this options works for your
use case

> 
> Thanks
> -- 
> *Stefano*
> -- 
> Stefano

Regards
Aleks



Re: haproxy on embedded device

2020-06-05 Thread Aleksandar Lazic
Hi,

On 03.06.20 16:20, Thomas Schmiedl wrote:
> Hi,
> 
> maybe someone can help me in this issue. I use 
> xupnpd2 (https://github.com/clark15b/xupnpd2) on my router 
> (with this firmware extension: https://freetz.github.io/wiki/index.en.html) 
> to receive/transfer some hls-streams to the TV. Now I try to 
> receive/transfer some youtube-hls-livestreams 
> (e.g. https://www.youtube.com/watch?v=F2ARbcgQN1s). Because 
> xupnpd2 doesn't support SSL/TLS and the author doesn't want to add SSL/TLS 
> support (I'm not a developer), my idea is to use haproxy as reverse proxy 
> to receive the m3u8/ts via haproxy and "forward" unencrypted to xupnpd2.
> 
> A first test with a single ts file works on my ubuntu-pc, but it doesn't 
> work on the router. Thanks for your help.

Have you tried https://github.com/clark15b/xupnpd-live
as answered in
https://github.com/clark15b/xupnpd2/issues/2

> Best regards,
> Thomas
> 
> Here is the haproxy config (I always updated manually the 
> "googlevideo"-hostname):
> 
> global
> 
> defaults
>     mode http
>     timeout connect 5000
>     timeout client 5
>     timeout server 5
> 
> frontend main
>     mode http
>     bind *:8081
> 
>     acl is_ts path -m end .ts
>     use_backend segments
> 
> backend segments
>     http-request set-header Host r1---sn-4g5e6nss.googlevideo.com
>     server server1 r1---sn-4g5e6nss.googlevideo.com:443 ssl verify none
> 




Re: Termination state: CL--

2020-06-01 Thread Aleksandar Lazic
Hi.

Jun 1, 2020 1:37:55 PM Gaetan Deputier :

> Hello!
>
> We have recently observed that a very small amount of our connections were 
> ended with the following state: CL--. Those connections are coming from 
> browsers and are correlated to weird behaviours observed in our downstream 
> application (where a HTTP header and a body seem to be exchanged with another 
> request).
>
> Looking at the documentation, this state that:
>
> C : the TCP session was unexpectedly aborted by the client.
> L : the proxy was still transmitting LAST data to the client while the server 
> had already finished. This one is very rare as it can only happen when the 
> client dies while receiving the last packets.
>
> Does someone have more details about the L state specifically? What we should 
> we expect in our application in terms of sessions/packets/request?

Please can you share the anonymized config and the output of `haproxy -vv` and 
`uname -a`.

Regards
Aleks

> Thanks!
>
> G-
>




Re: decode key created with url32+src

2020-05-17 Thread Aleksandar Lazic
Tim.

Thank you for your prompt answer.

Regards

Aleks

On 18.05.20 01:30, Tim Düsterhus wrote:
> Aleks,
>
> Am 18.05.20 um 00:48 schrieb Aleksandar Lazic:
>> Is there a easy way to know which URL+src the key is?
>> [...]
>>   http-request track-sc1 url32+src table per_ip_and_url_rates unless { 
>> path_end .css .js .png .gif }
> No, as per the documentation:
>
>> url32+src : binary
>> This returns the concatenation of the "url32" fetch and the "src" fetch. The
>> resulting type is of type binary, with a size of 8 or 20 bytes depending on
>> the source address family. This can be used to track per-IP, per-URL 
>> counters.
> and
>
>> url32 : integer
>> This returns a 32-bit hash of the value obtained by concatenating the first
>> Host header and the whole URL including parameters (not only the path part of
>> the request, as in the "base32" fetch above). This is useful to track per-URL
>> activity. A shorter hash is stored, saving a lot of memory. The output type
>> is an unsigned integer.
> Thus you only have a hash value of the URL in question. However the IP
> address is stored in clear at the end of the resulting key. You might
> need to hex decode it.
>
> Best regards
> Tim Düsterhus
>



decode key created with url32+src

2020-05-17 Thread Aleksandar Lazic
Hi.

I have this lines in the Table per_ip_and_url_rates.
Is there a easy way to know which URL+src the key is?

# table: per_ip_and_url_rates, type: binary, size:1048576, used:56781
0x559813fc9200: key=xxx use=0 exp=85821390 http_req_rate(8640)=27
0x7fef40373630: key= use=0 exp=86380499 http_req_rate(8640)=4494

I used this blog post as base for the table.

https://www.haproxy.com/blog/bot-protection-with-haproxy/

That's the backend definition with HA-Proxy version 2.1.4-1ppa1~bionic

```
frontend https-in

  bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file {{ ansible_nodename 
}}/fullchain.pem crt /etc/ssl/haproxy/

  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # DNS labels are case insensitive (RFC 4343), we need to convert the hostname 
into lowercase
  # before matching, or any requests containing uppercase characters will never 
match.
  # http-request set-header Host %[req.hdr(Host),lower]

  # 
https://www.haproxy.com/blog/application-layer-ddos-attack-protection-with-haproxy/
  http-request track-sc0 src table per_ip_rates
 
  # https://www.haproxy.com/blog/bot-protection-with-haproxy/
  # track client's source IP + URL accessed in
  # per_ip_and_url_rates stick table
  http-request track-sc1 url32+src table per_ip_and_url_rates unless { path_end 
.css .js .png .gif }
 
  # Set the threshold to 15 within the time period
  acl exceeds_limit sc_gpc0_rate(0) gt 20

  # Increase the new-page count if this is the first time
  # they've accessed this page, unless they've already
  # exceeded the limit
  #http-request sc-inc-gpc0(0) if { sc_http_req_rate(1) eq 1 } !exceeds_limit

  # Deny requests if over the limit
  #http-request deny deny_status 429 if exceeds_limit

  # 10 requests per second
  #http-request deny deny_status 429 if { sc_http_req_rate(0) gt 200 }

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy


  declare capture request len 128
  declare capture request len 148
  declare capture request len 148

  http-request capture req.hdr(host) len 148

  # Add CORS response header
  acl is_cors_preflight method OPTIONS
  http-response add-header Access-Control-Allow-Origin "*" if is_cors_preflight
  http-response add-header Access-Control-Allow-Methods "GET,POST" if 
is_cors_preflight
  http-response add-header Access-Control-Allow-Credentials "true" if 
is_cors_preflight
  http-response add-header Access-Control-Max-Age "600" if is_cors_preflight

  use_backend be_nginx if { path_beg /.well-known/acme-challenge/ }
  use_backend 
%[req.hdr(host),lower,map(/etc/haproxy/haproxy_backend.map,be_default)]
```

Thanks for help.

Cheers

Aleks



[PATCH] DOC/MINOR: halog: Add long help info for ic flag

2020-05-15 Thread Aleksandar Lazic
Hi.

attached a patch for halog.

Regards

Aleks
>From 37ba93a5f29200e34cfb31aacf93ddcd80fca2ab Mon Sep 17 00:00:00 2001
From: Aleksandar Lazi 
Date: Fri, 15 May 2020 22:58:30 +0200
Subject: [PATCH] DOC/MINOR: halog: Add long help info for ic flag

Add missing long help text for the ic (ip count) flag
---
 contrib/halog/halog.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/halog/halog.c b/contrib/halog/halog.c
index 91e2af357..3c785fc09 100644
--- a/contrib/halog/halog.c
+++ b/contrib/halog/halog.c
@@ -190,6 +190,7 @@ void help()
 	   " -cc   output number of requests per cookie code (2 chars)\n"
 	   " -tc   output number of requests per termination code (2 chars)\n"
 	   " -srv  output statistics per server (time, requests, errors)\n"
+	   " -ic   output statistics per ip count (time, requests, errors)\n"
 	   " -u*   output statistics per URL (time, requests, errors)\n"
 	   "   Additional characters indicate the output sorting key :\n"
 	   "   -u : by URL, -uc : request count, -ue : error count\n"
-- 
2.20.1



Re: [tcp|http]-check expect status explained

2020-05-07 Thread Aleksandar Lazic
Hi Christopher.

On 07.05.20 07:55, Christopher Faulet wrote:
> Le 07/05/2020 à 00:06, Aleksandar Lazic a écrit :
>> On 07.05.20 00:02, Lukas Tribus wrote:
>>> On Wed, 6 May 2020 at 23:33, Aleksandar Lazic  wrote:
>>>>
>>>> Hi.
>>>>
>>>> The doc for [tcp|http]-check expect have some *-status arguments like 
>>>> "L7OK", "L7OKC","L6OK" and "L4OK" and so on.
>>>>
>>>> In the whole documentation are this states not explained.
>>>> I'm not sure in which chapter this states fit's, quick reminder 
>>>> HTTP,global, logging, new chapter?
>>>> My suggestion is to add "1.2." HTTP in HAProxy and explain how the htx 
>>>> works and what layer 4+6+7 means, opinions.
>>>
>>> It's not in the configuration documentation, it's in the management doc:
>>>
>>> https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.1
>>
>> Thanks, I don't look very often there but I should.
> 
> Hi Aleks,
> 
> You're right, it is not obvious. These status are reported in the stats and 
> are described in the management guide as Lukas said. But it is probably a 
> good idea to be more explicit. I slightly updated the configuration 
> documentation to not rely on internal names. I added a short description for 
> each status instead.

Cool thanks.

> Thanks,




Re: [tcp|http]-check expect status explained

2020-05-06 Thread Aleksandar Lazic
On 07.05.20 00:02, Lukas Tribus wrote:
> On Wed, 6 May 2020 at 23:33, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> The doc for [tcp|http]-check expect have some *-status arguments like 
>> "L7OK", "L7OKC","L6OK" and "L4OK" and so on.
>>
>> In the whole documentation are this states not explained.
>> I'm not sure in which chapter this states fit's, quick reminder HTTP,global, 
>> logging, new chapter?
>> My suggestion is to add "1.2." HTTP in HAProxy and explain how the htx works 
>> and what layer 4+6+7 means, opinions.
> 
> It's not in the configuration documentation, it's in the management doc:
> 
> https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.1

Thanks, I don't look very often there but I should.

> Lukas

Regards
Aleks



[tcp|http]-check expect status explained

2020-05-06 Thread Aleksandar Lazic
Hi.

The doc for [tcp|http]-check expect have some *-status arguments like "L7OK", 
"L7OKC","L6OK" and "L4OK" and so on.

In the whole documentation are this states not explained.
I'm not sure in which chapter this states fit's, quick reminder HTTP,global, 
logging, new chapter?
My suggestion is to add "1.2." HTTP in HAProxy and explain how the htx works 
and what layer 4+6+7 means, opinions.

I ask because I would like to create a table form the source to the 
documentation.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/checks.c;h=d5306defe05f77039562ba30fd1c1fcfec4b8f62;hb=HEAD#l106

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/checks.c;h=d5306defe05f77039562ba30fd1c1fcfec4b8f62;hb=HEAD#l139

Regards

Aleks



Re: Question about connection settings proto fcgi check maxconn 9 minconn 5 maxqueue 0

2020-05-04 Thread Aleksandar Lazic
Hi Christopher.

On 04.05.20 11:28, Christopher Faulet wrote:
> Le 03/05/2020 à 09:52, Aleksandar Lazic a écrit :
>> Hi.
>>
>> I play a little bit with proto fcgi and see something what I don't 
>> understand.
>>
>> Hopefully someone can explain it a bit.
>>
>> My php-fpm have the following settings.
>>
>> ```
>> pm = dynamic
>> pm.max_children = 10
>> pm.min_spare_servers = 4
>> pm.start_servers = 5
>> pm.max_spare_servers = 6
>> pm.max_requests = 500
>> ```
>>
>> The haproxy server line have the following settings.
>> Btw: is fullconn deprecated or removed in 2.2 because it's still in the doc?
>> I don't know if the health check connection is counted in the maxconn value 
>> or not, is it?
>>
>> ```
>> server server1 127.0.0.1:9000 proto fcgi check maxconn 9 minconn 5 maxqueue 0
>> ```
>>
>> Now my assumption is that I should never get this message in the php-fpm 
>> output because maxconn is 9 and pm.max_children = 10 but I got it.
>>
>>
>> ```
>> [03-May-2020 07:27:44] WARNING: [pool www] server reached pm.max_children 
>> setting (10), consider raising it
>> ```
>>
>> This means that the connections to php-fpm are higher then maxcon configured 
>> in haproxy and this is what I not understand.
>> Here  some haproxy logs from the overload.
>>
> 
> 
> Hi Aleks,
> 
> I've made some tests on my side and the fork politic of php-fpm seems to be a 
> bit strange 
> because with the same config and the check disabled, I've the same warning 
> and 10 php-fpm 
> children with only 3 clients. When I checked the opened connections, I only 
> have 4 connections. 
> So there is no reason to have 10 children.

Thank you for your time answer.

>> ```
>> # normal run
>> 127.0.0.1:36466 [03/May/2020:07:37:55.995] myproxy phpservers/server1 
>> 0/69/0/2/71 200 66318 - -  50/50/8/4/0 0/9 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36462 [03/May/2020:07:37:56.000] myproxy phpservers/server1 
>> 0/66/0/2/68 200 66318 - -  50/50/7/4/0 0/8 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36488 [03/May/2020:07:37:56.002] myproxy phpservers/server1 
>> 0/66/0/2/109 200 66318 - -  50/50/6/4/0 0/8 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36470 [03/May/2020:07:37:56.004] myproxy phpservers/server1 
>> 0/106/0/2/109 200 66318 - -  50/50/5/4/0 0/8 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36546 [03/May/2020:07:37:56.007] myproxy phpservers/server1 
>> 0/106/0/2/108 200 66318 - -  50/50/4/4/0 0/8 "GET /pinf.php HTTP/1.1"
>>
>> # I think this a queued request
>> 127.0.0.1:36576 [03/May/2020:07:37:55.334] myproxy phpservers/server1 
>> 0/69/0/1402/1471 200 66318 - -  50/50/3/3/0 0/38 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36484 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
>> 0/41/0/1402/1443 200 66318 - -  50/50/3/3/0 0/39 "GET /pinf.php HTTP/1.1"
>>
> 
> Here, these both requests were not queued more longer than others (time spent 
> in queue is the second value, 69 and 41). But php-fpm was a quite long to 
> response. 1.4 seconds to have the response headers.
> 
>> 127.0.0.1:36480 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
>> 0/41/0/2402/2444 200 66318 - -  50/50/1/1/0 0/40 "GET /pinf.php HTTP/1.1"
>> 127.0.0.1:36486 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
>> 0/42/0/2402/2444 200 66318 - -  50/50/0/0/0 0/41 "GET /pinf.php HTTP/1.1"
> 
> And here, 2.4 seconds. It may be the reason why the health checks timed out.

This could be because php-fpm forks childs.

> So maybe there is too many opened connections. It may be a problem with idle 
> connections. 
> But you should not trust php-fpm warnings :) Monitor connections really 
> established instead.

Thanks I will go that way.



Question about connection settings proto fcgi check maxconn 9 minconn 5 maxqueue 0

2020-05-03 Thread Aleksandar Lazic
Hi.

I play a little bit with proto fcgi and see something what I don't understand.

Hopefully someone can explain it a bit.

My php-fpm have the following settings.

```
pm = dynamic
pm.max_children = 10
pm.min_spare_servers = 4
pm.start_servers = 5
pm.max_spare_servers = 6
pm.max_requests = 500
```

The haproxy server line have the following settings.
Btw: is fullconn deprecated or removed in 2.2 because it's still in the doc?
I don't know if the health check connection is counted in the maxconn value or 
not, is it?

```
server server1 127.0.0.1:9000 proto fcgi check maxconn 9 minconn 5 maxqueue 0
```

Now my assumption is that I should never get this message in the php-fpm output 
because maxconn is 9 and pm.max_children = 10 but I got it.


```
[03-May-2020 07:27:44] WARNING: [pool www] server reached pm.max_children 
setting (10), consider raising it
```

This means that the connections to php-fpm are higher then maxcon configured in 
haproxy and this is what I not understand.
Here  some haproxy logs from the overload.

```
# normal run
127.0.0.1:36466 [03/May/2020:07:37:55.995] myproxy phpservers/server1 
0/69/0/2/71 200 66318 - -  50/50/8/4/0 0/9 "GET /pinf.php HTTP/1.1"
127.0.0.1:36462 [03/May/2020:07:37:56.000] myproxy phpservers/server1 
0/66/0/2/68 200 66318 - -  50/50/7/4/0 0/8 "GET /pinf.php HTTP/1.1"
127.0.0.1:36488 [03/May/2020:07:37:56.002] myproxy phpservers/server1 
0/66/0/2/109 200 66318 - -  50/50/6/4/0 0/8 "GET /pinf.php HTTP/1.1"
127.0.0.1:36470 [03/May/2020:07:37:56.004] myproxy phpservers/server1 
0/106/0/2/109 200 66318 - -  50/50/5/4/0 0/8 "GET /pinf.php HTTP/1.1"
127.0.0.1:36546 [03/May/2020:07:37:56.007] myproxy phpservers/server1 
0/106/0/2/108 200 66318 - -  50/50/4/4/0 0/8 "GET /pinf.php HTTP/1.1"

# I think this a queued request
127.0.0.1:36576 [03/May/2020:07:37:55.334] myproxy phpservers/server1 
0/69/0/1402/1471 200 66318 - -  50/50/3/3/0 0/38 "GET /pinf.php HTTP/1.1"
127.0.0.1:36484 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
0/41/0/1402/1443 200 66318 - -  50/50/3/3/0 0/39 "GET /pinf.php HTTP/1.1"

# normal run
127.0.0.1:36576 [03/May/2020:07:37:56.807] myproxy phpservers/server1 0/0/0/1/1 
200 66318 - -  50/50/3/3/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36484 [03/May/2020:07:37:56.808] myproxy phpservers/server1 0/0/0/1/1 
200 66318 - -  50/50/3/3/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36576 [03/May/2020:07:37:56.809] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/3/3/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36484 [03/May/2020:07:37:56.810] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/2/2/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36576 [03/May/2020:07:37:56.812] myproxy phpservers/server1 0/0/0/1/1 
200 66318 - -  50/50/3/3/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36484 [03/May/2020:07:37:56.812] myproxy phpservers/server1 0/0/0/1/1 
200 66318 - -  50/50/2/2/0 0/0 "GET /pinf.php HTTP/1.1"

127.0.0.1:36480 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
0/41/0/2402/2444 200 66318 - -  50/50/1/1/0 0/40 "GET /pinf.php HTTP/1.1"
127.0.0.1:36486 [03/May/2020:07:37:55.363] myproxy phpservers/server1 
0/42/0/2402/2444 200 66318 - -  50/50/0/0/0 0/41 "GET /pinf.php HTTP/1.1"

127.0.0.1:36480 [03/May/2020:07:37:57.808] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/1/1/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36486 [03/May/2020:07:37:57.809] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/0/0/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36480 [03/May/2020:07:37:57.811] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/1/1/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36486 [03/May/2020:07:37:57.811] myproxy phpservers/server1 0/0/0/1/2 
200 66318 - -  50/50/0/0/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36480 [03/May/2020:07:37:57.814] myproxy phpservers/server1 0/0/0/2/2 
200 66318 - -  50/50/1/1/0 0/0 "GET /pinf.php HTTP/1.1"
127.0.0.1:36486 [03/May/2020:07:37:57.814] myproxy phpservers/server1 0/0/0/2/2 
200 66318 - -  50/50/0/0/0 0/0 "GET /pinf.php HTTP/1.1"

[WARNING] 123/073758 (7) : Health check for server phpservers/server1 failed, 
reason: Layer7 timeout, check duration: 2001ms, status: 2/3 UP.
Health check for server phpservers/server1 failed, reason: Layer7 timeout, 
check duration: 2001ms, status: 2/3 UP.
[WARNING] 123/073802 (7) : Health check for server phpservers/server1 failed, 
reason: Layer7 timeout, check duration: 2002ms, status: 1/3 UP.
Health check for server phpservers/server1 failed, reason: Layer7 timeout, 
check duration: 2002ms, status: 1/3 UP.
[WARNING] 123/073806 (7) : Health check for server phpservers/server1 failed, 
reason: Layer7 timeout, check duration: 2001ms, status: 0/2 DOWN.
Health check for server phpservers/server1 failed, reason: Layer7 timeout, 
check duration: 2001ms, status: 0/2 DOWN.
[WARNING] 123/073806 (7) : Server phpservers/server1 is DOWN. 0 active and 0 
backup servers left. 0 

Re: 'http-check connect default linger proto fcgi' keeps connections open?

2020-05-02 Thread Aleksandar Lazic
On 02.05.20 14:25, Aleksandar Lazic wrote:
> Hi.
> 
> May 2, 2020 9:43:40 AM Christopher Faulet :
> 
>> Le 02/05/2020 à 00:05, Aleksandar Lazic a écrit :
>>
>>> Hi.
>>> I wanted to use the shiny new http-check feature and have seen that the 
>>> connection keeps alive after the health check.
>>> I have also tried to remove "linger" but this does not change anything.
>>> Maybe I make something wrong.
>>>
>>>
>> Hi Aleks,
>>
>> You're right. There is a bug. And trying to fix it, I found 2 others :) It 
>> was a wrong test on the FCGI connection flags. Because of this bug, the 
>> connection remains opened till the server timeout. I pushed fixes in 
>> upstream. It should be ok now.
> 
> Ah cool.
> 
> I will try the next snapshot.

Okay I was to impatient to see if it works and have now build the Docker image 
from git clone ;-)
I can confirm that this bug was fixed.

Now we have a active testing loadbalancer for php-fpm and other fcgi backends 
8-O

>> Thanks,

Thanks Christopher.

Best regards
Aleks



Re: 'http-check connect default linger proto fcgi' keeps connections open?

2020-05-02 Thread Aleksandar Lazic
Hi.

May 2, 2020 9:43:40 AM Christopher Faulet :

> Le 02/05/2020 à 00:05, Aleksandar Lazic a écrit :
>
> > Hi.
> > I wanted to use the shiny new http-check feature and have seen that the 
> > connection keeps alive after the health check.
> > I have also tried to remove "linger" but this does not change anything.
> > Maybe I make something wrong.
> >
> >
> Hi Aleks,
>
> You're right. There is a bug. And trying to fix it, I found 2 others :) It 
> was a wrong test on the FCGI connection flags. Because of this bug, the 
> connection remains opened till the server timeout. I pushed fixes in 
> upstream. It should be ok now.

Ah cool.

I will try the next snapshot.

> Thanks,
>
>






'http-check connect default linger proto fcgi' keeps connections open?

2020-05-01 Thread Aleksandar Lazic
Hi.

I wanted to use the shiny new http-check feature and have seen that the 
connection keeps alive after the health check.
I have also tried to remove "linger" but this does not change anything.
Maybe I make something wrong.

My setup:

I used here the docker hub haproxy Dockerfile and just used the snapshot from 
1st May.
Shell 01: podman run --rm -it -p 8080:8080 -v 
/tmp/haproxy-config:/usr/local/etc/haproxy --network host hap-snap
Shell 02: podman run --rm -it -p 9000:9000 --network host -v 
/tmp/php-root:/var/www/html -v /tmp/php-conf:/mnt php:7.4-fpm --fpm-config 
/mnt/php-fpm.conf --force-stderr
Shell 03: ss  --tcp |egrep 9000 # this shows 'ESTAB  0   0  
127.0.0.1:58076  127.0.0.1:9000'

You can easily replace podman with docker.

I get without any user request the following message from php-fpm.
```
[01-May-2020 21:50:32] NOTICE: fpm is running, pid 1
[01-May-2020 21:50:32] NOTICE: ready to handle connections
[01-May-2020 21:51:12] WARNING: [pool www] server reached pm.max_children 
setting (20), consider raising it
^C[01-May-2020 21:51:33] NOTICE: Terminating ...
[01-May-2020 21:51:33] NOTICE: exiting, bye-bye!

```

The configs:

```
podman run --rm -it -p 8080:8080 -v /tmp/haproxy-config:/usr/local/etc/haproxy 
--network host hap-snap haproxy -vv
HA-Proxy version 2.2-dev6-a911548 2020/04/30 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.3.0-45-generic #37-Ubuntu SMP Thu Mar 26 20:41:27 UTC 2020 
x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
-Wno-missing-field-initializers -Wno-implicit-fallthrough 
-Wno-stringop-overflow -Wno-cast-function-type -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_ZLIB=1

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS 
+DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL 
+THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
Running on OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with gcc compiler version 8.3.0
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with the Prometheus exporter as a service

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
fcgi : mode=HTTP   side=BEmux=FCGI
: mode=HTTP   side=FE|BE mux=H1
: mode=TCPside=FE|BE mux=PASS

Available services :
prometheus-exporter

Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[TRACE] trace
[COMP] compression
```

HAProxy config:
```
global
log stdout format raw daemon debug

defaults
log global

modehttp
option  httplog
option  dontlognull
option log-health-checks

timeout connect 5s
timeout client  50s
timeout server  50s

frontend myproxy
bind :8080
default_backend phpservers

backend phpservers
use-fcgi-app php-fpm

option httpchk
http-check connect default linger proto fcgi
http-check send meth GET uri /ping ver HTTP/1.1
http-check expect string pong

server server1 127.0.0.1:9000 proto fcgi check

fcgi-app php-fpm
log-stderr global
docroot /var/www/html
index index.php
path-info ^(/.+\.php)(/.*)?$

```

PHP Config
```
egrep -v '^(;|$)' /tmp/php-conf/php-fpm.conf
[global]
pid = /run/php7.4-fpm.pid

OT: I love this Project ;-)

2020-04-22 Thread Aleksandar Lazic
Hi all.

I know it's a little bit off topic but because I have in another project 
reached a big milestone, with the support of the People here, I would like to 
say.

HAProxy People and Community and Program is really great ;-) ;-) ;-) ;-).

Very best wishes

Aleks



Re: Log Backend call

2020-04-18 Thread Aleksandar Lazic
I have created a issue for this.

https://github.com/haproxy/haproxy/issues/589

On 19.04.20 00:15, Aleksandar Lazic wrote:
> Hi.
> 
> I haven't seen any option to log the request after the `http-request set-... 
> ` phase.
> 
> Is this covered in %HP or is this the request from the client?
> 
> That's the code and it looks to me that this isn't set after the rewrite 
> phase.
> 
> http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/log.c;hb=dfad6a41ad9f012671b703788dd679cf24eb8c5a#l2693
> 
> The use case is that I need to know how the http request looks to the backend 
> after the backend.
> 
> A tcpdump isn't possible because the backend is a TLS one.
> 
> It would be nice to have also a similar output in the debug mode as for the 
> client request.
> 
> ```
> 
> 0002:https-in.accept(0009)=002b from [:::Client-IP:34452] ALPN=h2
> 0002:https-in.clireq[002b:]: GET 
> https://DOMAIN.com/img/logo-entrypages.png HTTP/2.0
> 0002:https-in.clihdr[002b:]: user-agent: curl/7.65.3
> 0002:https-in.clihdr[002b:]: accept: */*
> 0002:https-in.clihdr[002b:]: host: DOMAIN.com
> 
> Suggested output after rewrite
> 
> 0002:https-out.connect(0010)=002b from [:::DEST-IP:DEST-PORT] ALPN=h1
> 0002:https-out.srvreq[002b:]: GET 
> https://REWRITTEN.com/NEW_PATH/img/logo-entrypages.png HTTP/2.0
> 0002:https-out.srvhdr[002b:]: user-agent: curl/7.65.3
> 0002:https-out.srvhdr[002b:]: accept: */*
> 0002:https-out.srvhdr[002b:]: host: REWRITTEN.com
> 
> 0002:be_static.srvrep[002b:002c]: HTTP/1.1 401 Unauthorized
> 0002:be_static.srvhdr[002b:002c]: content-length: 131
> 0002:be_static.srvhdr[002b:002c]: content-type: text/html; charset=UTF-8
> 0002:be_static.srvhdr[002b:002c]: www-authenticate: Swift realm="Client"
> 0002:be_static.srvhdr[002b:002c]: www-authenticate: Keystone 
> uri="https://auth.cloud.ovh.net/;
> 0002:be_static.srvhdr[002b:002c]: x-trans-id: tx011f76ce9d9f43a09dcea-...
> 0002:be_static.srvhdr[002b:002c]: x-openstack-request-id: 
> tx011f76ce9d9f43a09dcea-...
> 0002:be_static.srvhdr[002b:002c]: date: Sat, 18 Apr 2020 21:59:48 GMT
> 0002:be_static.srvhdr[002b:002c]: x-iplb-instance: ...
> 0002:be_static.srvcls[002b:002c]
> 0002:be_static.clicls[002b:002c]
> 0002:be_static.closed[002b:002c]
> 
> ```
> 
> Opinions?
> 
> Regards
> 
> Aleks
> 




Log Backend call

2020-04-18 Thread Aleksandar Lazic
Hi.

I haven't seen any option to log the request after the `http-request set-... ` 
phase.

Is this covered in %HP or is this the request from the client?

That's the code and it looks to me that this isn't set after the rewrite phase.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/log.c;hb=dfad6a41ad9f012671b703788dd679cf24eb8c5a#l2693

The use case is that I need to know how the http request looks to the backend 
after the backend.

A tcpdump isn't possible because the backend is a TLS one.

It would be nice to have also a similar output in the debug mode as for the 
client request.

```

0002:https-in.accept(0009)=002b from [:::Client-IP:34452] ALPN=h2
0002:https-in.clireq[002b:]: GET 
https://DOMAIN.com/img/logo-entrypages.png HTTP/2.0
0002:https-in.clihdr[002b:]: user-agent: curl/7.65.3
0002:https-in.clihdr[002b:]: accept: */*
0002:https-in.clihdr[002b:]: host: DOMAIN.com

Suggested output after rewrite

0002:https-out.connect(0010)=002b from [:::DEST-IP:DEST-PORT] ALPN=h1
0002:https-out.srvreq[002b:]: GET 
https://REWRITTEN.com/NEW_PATH/img/logo-entrypages.png HTTP/2.0
0002:https-out.srvhdr[002b:]: user-agent: curl/7.65.3
0002:https-out.srvhdr[002b:]: accept: */*
0002:https-out.srvhdr[002b:]: host: REWRITTEN.com

0002:be_static.srvrep[002b:002c]: HTTP/1.1 401 Unauthorized
0002:be_static.srvhdr[002b:002c]: content-length: 131
0002:be_static.srvhdr[002b:002c]: content-type: text/html; charset=UTF-8
0002:be_static.srvhdr[002b:002c]: www-authenticate: Swift realm="Client"
0002:be_static.srvhdr[002b:002c]: www-authenticate: Keystone 
uri="https://auth.cloud.ovh.net/;
0002:be_static.srvhdr[002b:002c]: x-trans-id: tx011f76ce9d9f43a09dcea-...
0002:be_static.srvhdr[002b:002c]: x-openstack-request-id: 
tx011f76ce9d9f43a09dcea-...
0002:be_static.srvhdr[002b:002c]: date: Sat, 18 Apr 2020 21:59:48 GMT
0002:be_static.srvhdr[002b:002c]: x-iplb-instance: ...
0002:be_static.srvcls[002b:002c]
0002:be_static.clicls[002b:002c]
0002:be_static.closed[002b:002c]

```

Opinions?

Regards

Aleks



New color on www.haproxy.org

2020-04-18 Thread Aleksandar Lazic
Hi.

I like the new table on https://www.haproxy.org/ . The color show now much 
easier which version is in which state ;-)

Regards

Aleks



Re: HAproxy Error

2020-04-16 Thread Aleksandar Lazic

On 16.04.20 10:57, Willy Tarreau wrote:

On Thu, Apr 16, 2020 at 10:26:54AM +0200, Willy Tarreau wrote:

Hi Lukas,

On Thu, Apr 16, 2020 at 09:44:39AM +0200, Lukas Tribus wrote:

Provide the output of "which haproxy" and "haproxy -vv", I doubt you
are actually running the Redhat package you indicated, more likely you
built Haproxy manually from source on top of it.


This just makes me think that on glibc-based systems we could use
getauxval() to report the full path name to the executable in case
of error. I'll see if I can come up with something usable.


OK I did this:

   $ env PATH=$PWD:$PATH haproxy -c -f mini4.cfg
   [WARNING] 106/105513 (30832) : Setting tune.ssl.default-dh-param to 1024 by 
default, if your workload permits it you should set it to at least 2048. Please 
set a value >= 1024 to make this warning disappear.
   [NOTICE] 106/105513 (30832) : haproxy version is 2.2-dev5-bb8698-83
   [NOTICE] 106/105513 (30832) : path to executable is /g/public/haproxy/haproxy
   [ALERT] 106/105513 (30832) : Some warnings were found and 'zero-warning' is 
set. Aborting.

It also reports the version string, which can sometimes help when
dealing with multiple installations. The path to the executable is
only available on glibc for now (we're using the kernel's AUX vector).
This could possibly be improved to at least report argv[0] when not
available. But let's see how this works. If it helps we could even
backport it.


Hey that's really cool ;-)


Willy






Re: Disclaimer in emails

2020-04-15 Thread Aleksandar Lazic

On 15.04.20 16:28, Lukas Tribus wrote:

Hello Tim, Aleks,

I fully agree with everything Tim just said.

Let's keep the list about haproxy.


I agree with this line only.


Lukas






Re: HAproxy Error

2020-04-15 Thread Aleksandar Lazic

On 15.04.20 14:57, Tim Düsterhus wrote:

Aleks,
Ilya,

Am 15.04.20 um 14:35 schrieb Aleksandar Lazic:

Useless disclaimer for a public mailing list!



Am 15.04.20 um 14:25 schrieb Илья Шипицин:> hello.

should we destroy this message ?


Can we please stop complaining about these disclaimers? It adds more
noise to the list than the disclaimers itself which are neatly tucked
away at the end of the email.


Nope, because I only add my complaining when I answer a question or try
to answer a question.

I agree with you that just to talk about the disclaimer increases the
noise, but sometimes is a discussion necessary, IMHO.


I'm fully aware that they are of dubious value, but your complaints are
not going to change anything about them. The HAProxy users seeking help
on the list are not going to be able to get rid of them, because that's
usually a decision made by their non-technical management.


That's a assumption which is not always true, as far as I know. There are
some persons which can decide by them self which e-mail address they use.


Maybe they
are even legally necessary in some legislation on this world. I don't
know, because I'm not a lawyer proficient in law of all 195 countries.


Me neither but the most reason to ad this disclaimer is legal certainty
which is useless, from my point of view.


The users came here to receive help with HAProxy and not to be berated.


I don't see the notice about useless disclaimer as berate. From my point
of view is the information that the e-mail disclaimer to a public mailing
list a clarification not a berate.


FWIW: If I was sending mail to this list from $COMPANY email you would
need to live with a 14 line signature that is not a disclaimer, but
similarly is snail mail information of $COMPANY that I'm required to add
by law.


A snail mail disclaimer is not the same as e-mail disclaimer.

https://en.wikipedia.org/wiki/Disclaimer
https://en.wikipedia.org/wiki/Email_disclaimer#Effectiveness_of_disclaimers


Best regards
Tim Düsterhus


Best regards
Aleks



Re: HAproxy Error

2020-04-15 Thread Aleksandar Lazic

Hi.

On 15.04.20 13:39, bindushree...@cognizant.com wrote:

++Adding attachement

Thank you in advance.

Thanks,

Bindushree D B

*From:* D B, Bindushree (Cognizant)
*Sent:* Wednesday, April 15, 2020 5:08 PM
*To:* haproxy@formilux.org
*Subject:* HAproxy Error
*Importance:* High

Hi Team,

We are in the process of using newer HAproxy version.

Below is the scenario explained where we are stuck.

·In RHEL 8.1 version, installed the latest version of the application.

         haproxy-1.8.15-6.el8_1.1.x86_64

# yum info haproxy


haproxy -vv is much better then the rpm info.


Red Hat Update Infrastructure 3 Client Configuration Server 8   
 12 kB/s | 2.1 kB 
00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (RPMs)  
     24 kB/s | 2.8 kB 
00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS from RHUI (RPMs) 
 21 kB/s | 2.4 kB 
00:00


Have you asked Red Hat about this issue, maybe there is a nother rpm which have 
ssl build in?


Installed Packages

Name     : haproxy
Version  : 1.8.15
Release  : 6.el8_1.1
Architecture : x86_64
Size : 4.4 M
Source   : haproxy-1.8.15-6.el8_1.1.src.rpm
Repository   : @System

 From repo    : rhel-8-appstream-rhui-rpms


[snipp]


·So after this configuration file is updated with our configurations. We need 
to use multiple certificates (SNI) hence used bind option to verify the 
certificates under the folder. But receiving below error please help us on 
priority.

Attached configuration file for reference.

Below is the error we are receiving.

# haproxy -f /etc/haproxy/haproxy.cfg -c

[ALERT] 105/113215 (5684) : parsing [/etc/haproxy/haproxy.cfg:33] : 'bind 
*:443' unknown keyword 'ssl'. Registered keywords :


Well it looks like the haproxy isn't build with openssl that's the reason why 
the haproxy -vv output is required.

http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-ssl

[snipp]


[ALERT] 105/113215 (5684) : parsing [/etc/haproxy/haproxy.cfg:36] : error 
detected while parsing an 'http-request set-header' condition : unknown fetch 
method 'ssl_fc' in ACL expression 'ssl_fc'.

[ALERT] 105/113215 (5684) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

Please check the error and configuration and let me know what needs to done to 
fix the issue.

Thanks,

Bindushree D B

This e-mail and any files transmitted with it are for the sole use of the intended recipient(s) and may contain confidential and privileged information. If you are not the intended recipient(s), please reply to the sender and destroy all copies of the original message. Any unauthorized review, use, disclosure, dissemination, forwarding, printing or copying of this email, and/or any action taken in reliance on the contents of this e-mail is strictly prohibited and may be unlawful. Where permitted by applicable law, this e-mail and other e-mail communications sent to and from Cognizant e-mail addresses may be monitored. This e-mail and any files transmitted with it are for the sole use of the intended recipient(s) and may contain confidential and privileged information. If you are not the intended recipient(s), please reply to the sender and destroy all copies of the original message. Any unauthorized review, use, disclosure, dissemination, forwarding, printing or copying of 
this email, and/or any action taken in reliance on the contents of this e-mail is strictly prohibited and may be unlawful. Where permitted by applicable law, this e-mail and other e-mail communications sent to and from Cognizant e-mail addresses may be monitored.


Useless disclaimer for a public mailing list!

Regards
Aleks



Re: Question regarding increasing requests more than 32kb

2020-04-14 Thread Aleksandar Lazic

Hi Aravind.

On 14.04.20 06:42, Aravind Viswanathan wrote:

Hi Alek,

Thanks for the response.

Could you please let me know if these parameters need to be set on Global or 
Defaults section?


Please be so kind and read the documentation there is shown in which section 
this parameter have
to be set, because it's not just set a parameter.
I strongly recommend to understand what this parameter do, which impact it have 
and why it's
in the specific section.


Regards,
Aravind Viswanathan

-Original Message-
From: Aleksandar Lazic 
Sent: Monday, April 13, 2020 4:14 PM
To: Aravind Viswanathan 
Cc: haproxy@formilux.org
Subject: Re: Question regarding increasing requests more than 32kb

Hi.

On 13.04.20 08:18, Aravind Viswanathan wrote:

Hi Team,

Good Morning.

We are using HaProxy as a load balancer in our bitbucket system and
Bitbucket is linked to JIRA via Application links.


Please can you share the haproxy version and your config.

haproxy -vv


Recently we noticed an error in our JIRA log

2020-04-01 03:08:23,477 Caesium-1-3 ERROR ServiceRunner
[c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh
failure

com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure:
Data Provider refresh failed with error code 400 and message - HTTP
status 400 Bad request]

and when we checked the same with Atlassian support they said we need
to configure request going through HAProxy is allowed as big as 32kb.
I thought Increasing the maxconn might solve this but later I
understood,

maxconn
Sets the maximum per-process number of concurrent connections to .

Could you please advise how to configure request going through HAProxy
is allowed as big as 32kb?


You could take a look to this parameter which describes the correlation between 
tune.bufsize, tune.maxrewrite and maxconn.

https://nam05.safelinks.protection.outlook.com/?url=http%3A%2F%2Fcbonte.github.io%2Fhaproxy-dconv%2F2.1%2Fconfiguration.html%23tune.bufsizedata=02%7C01%7CAravind.Viswanathan%40efi.com%7C01112b41876642f7c4bc08d7df97a09c%7C3fe4532499b245c397517034bae71475%7C0%7C0%7C637223714645124453sdata=miZ7S%2FByUOE00tsvHLDzeMNgY1CBRagyQmFKhqYfhgc%3Dreserved=0


/Regards,/

/Aravind Viswanathan/

Confidentiality notice: This message may contain confidential information.
It is intended only for the person to whom it is addressed. If you are
not that person, you should not use this message. We request that you
notify us by replying to this message, and then delete all copies
including any contained in your reply. Thank you.


Again a useless text as you send the Mail to a public mailing list.
https://nam05.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FEmail_disclaimerdata=02%7C01%7CAravind.Viswanathan%40efi.com%7C01112b41876642f7c4bc08d7df97a09c%7C3fe4532499b245c397517034bae71475%7C0%7C0%7C637223714645124453sdata=JoLQJJ6ArmUC7jAT%2BDTMGtHINcnhHVV0QO%2FSW1z%2BVco%3Dreserved=0

Regards
Aleks






Re: Use lua setting a server port

2020-04-13 Thread Aleksandar Lazic
Apr 13, 2020 2:43:06 PM io Sen :

> we have a function to setting server ip address : set_addr
> and we have a function to query server ip address and server port now : 
> get_addr
> but not have a setting server port function ,
> Is it possible for set_addr to support setting ports?
> like this:
> set_addr(ip,port)
> or
> set_port(8443)

Well it looks like that there should be added a new function like 
hlua_server_set_addr 

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/hlua_fcn.c;h=a9c7fe507fda3e404a3ce0770224a280118966f6;hb=HEAD

Maybe you can sent a patch to add the set_port function.
There is a update_server_addr_port function which could be used for the lua 
wrapper.

Regards
Aleks





Re: Question regarding increasing requests more than 32kb

2020-04-13 Thread Aleksandar Lazic

Hi.

On 13.04.20 08:18, Aravind Viswanathan wrote:

Hi Team,

Good Morning.

We are using HaProxy as a load balancer in our bitbucket system and Bitbucket 
is linked to JIRA via Application links.


Please can you share the haproxy version and your config.

haproxy -vv


Recently we noticed an error in our JIRA log

2020-04-01 03:08:23,477 Caesium-1-3 ERROR ServiceRunner 
[c.a.j.p.devstatus.provider.DefaultDevSummaryPollService] Refresh failure


com.atlassian.jira.plugin.devstatus.provider.DataProviderRefreshFailure: 
Data Provider refresh failed with error code 400 and message - HTTP status 400 Bad request]


and when we checked the same with Atlassian support they said we need to 
configure request going through HAProxy is allowed as big as 32kb. I thought 
Increasing the maxconn might solve this but later I understood,


maxconn
Sets the maximum per-process number of concurrent connections to .

Could you please advise how to configure request going through HAProxy is 
allowed as big as 32kb?


You could take a look to this parameter which describes the correlation between
tune.bufsize, tune.maxrewrite and maxconn.

http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#tune.bufsize


/Regards,/

/Aravind Viswanathan/

Confidentiality notice: This message may contain confidential information. 
It is intended only for the person to whom it is addressed. If you are not 
that person, you should not use this message. We request that you notify us 
by replying to this message, and then delete all copies including any contained 
in your reply. Thank you.


Again a useless text as you send the Mail to a public mailing list.
https://en.wikipedia.org/wiki/Email_disclaimer

Regards
Aleks



Re: interpreting haproxy 2.1 EOL statement

2020-04-12 Thread Aleksandar Lazic

On 03.03.20 15:28, Andrew McDermott wrote:


Hi,

 From the following:

   $ ~/git.haproxy.org/haproxy-2.1/haproxy  -v
   HA-Proxy version 2.1.3-ce757f-13 2020/02/21 - https://haproxy.org/
   Status: stable branch - will stop receiving fixes around Q1 2021.
   Known bugs: http://www.haproxy.org/bugs/bugs-2.1.3.html

that EOL is around Q1 2021.

Broadly, when is that? Jan-March 2021?

TIA


To add a further one will the 2.2 be a LTS one?

Regards
Aleks



FYI OKD (OpenShift Origin) 3.11 Router with HAProxy 2.1 and TLS 1.3

2020-04-11 Thread Aleksandar Lazic

Hi.

There was a question in the openshift-dev channel from @Josef how to build the 
OKD 3.11 Router with newer HAProxy version.

I have now created a new OKD Router image with HAProxy 2.1 and TLS 1.3 and 
pushed it to docker hub.
https://hub.docker.com/repository/docker/me2digital/okd-router-hap21

Feedback is very welcome

Regards

Aleks



Check if backup server is active

2020-04-08 Thread Aleksandar Lazic

Hi.

I try to use automatically the backend server when the primary serve is not 
available.

The following snipplet is my solution with haproxy (2.1.3-3ppa1~bionic).
Is there a bette solution or is this a okay solution from HAProxy point of view?

```
backend be_static
  log global
  option httpchk GET {{ http_checks["static_http"]}} HTTP/1.1\r\nHost:\ {{ 
hosts["static_http"]}}

  # check if the primary server is up
  acl use_prim srv_is_up(static_prim)
  http-request set-header Host {{ hosts["static_http"] }} if use_prim

  http-request set-header Host {{ hosts["static_storage"] }} if ! use_prim
  http-request set-path /v1/AUTH_OBJ-URL/Static%[path] if ! use_prim

  server static_prim {{ hosts["static_http"] }}:443 resolvers mydns ssl check check-ssl check-sni 
{{ hosts["static_http"] }} sni str({{ hosts["static_http"] }}) ca-file 
/etc/haproxy/letsencryptauthorityx3.pem
  server static_stor {{ hosts["static_storage"] }}:443 resolvers mydns ssl check check-ssl 
check-sni {{ hosts["static_storage"] }} sni str({{ hosts["static_storage"] }}) ca-file 
/etc/haproxy/Sectigo_RSA_Domain_Validation_Secure_Server_CA.pem backup
```

Best regards

Aleks



Re: Crazy anomaly!

2020-04-08 Thread Aleksandar Lazic

Hi Nicolas.

On 08.04.20 20:34, Nicolas Pujol wrote:

Hi,

I installed haproxy and two test servers with the basic configuration of nginx 
+ listening on port 443. The HAProxy server provides the Let's encrypt SSL 
certificates.

When I consult the 2 sites in HTTP, I have no problem.

With HTTPS it works *_sometimes_*. Mostly from smartphone but not from compuers 
(it depends...)!

So when I try to understand:
#haproxy -d -f /etc/haproxy/haproxy.cfg


Please can you tell us which haproxy version you use and what's the config is.

haproxy -vv
cat /etc/haproxy/haproxy.cfg


When the browser display the page, I have only one line on the logs:

:https_bind.accept(0008)=000b from [ip_fom_home:41009] ALPN=


When that's don't work, mostly from a computer but sometimes from smartphone 
even if the browser displayed the page some seconds before I had always 10 
accept lines:

001e:https_bind.accept(0008)=000f from [ip_from_home:41133] ALPN=
001e:https_bind.clicls[000f:]
001e:https_bind.closed[000f:]
001f:https_bind.accept(0008)=000f from [ip_from_home:41135] ALPN=
001f:https_bind.clicls[000f:]
001f:https_bind.closed[000f:]
0020:https_bind.accept(0008)=000f from [ip_from_home:41137] ALPN=
0020:https_bind.clicls[000f:]
0020:https_bind.closed[000f:]
0021:https_bind.accept(0008)=000f from [ip_from_home:41139] ALPN=
0021:https_bind.clicls[000f:]
0021:https_bind.closed[000f:]
0022:https_bind.accept(0008)=000f from [ip_from_home:41141] ALPN=
0022:https_bind.clicls[000f:]
0022:https_bind.closed[000f:]
0023:https_bind.accept(0008)=000f from [ip_from_home:41143] ALPN=
0023:https_bind.clicls[000f:]
0023:https_bind.closed[000f:]
0024:https_bind.accept(0008)=000f from [ip_from_home:41145] ALPN=
0024:https_bind.clicls[000f:]
0024:https_bind.closed[000f:]
0025:https_bind.accept(0008)=000f from [ip_from_home:41147] ALPN=
0025:https_bind.clicls[000f:]
0025:https_bind.closed[000f:]
0026:https_bind.accept(0008)=000f from [ip_from_home:41149] ALPN=
0026:https_bind.clicls[000f:]
0026:https_bind.closed[000f:]
0027:https_bind.accept(0008)=000f from [ip_from_home:41151] ALPN=
0027:https_bind.clicls[000f:]
0027:https_bind.closed[000f:]


Do you know where the problem come from? I don't understand why that's work but 
not every time.

When that's don't work, I restart haproxy and that's work again or I refresh 
the browser page around 15 times and the page is displayed again!

Thanks

Nicolas





Re: [*EXT*] Re: 503 SC with fcgi

2020-04-08 Thread Aleksandar Lazic

On 08.04.20 09:46, Ionel GARDAIS wrote:

Oh my !
This is a chroot issue.

haproxy is running in chroot but the fpm socket is outside.
When placing the socket inside the jail, it works with the socket.

Does the performance difference between IP and socket is worth the trouble ?

I'm sure there are some but I use it not because of performance.


Regards
Aleks



Re: [*EXT*] Re: 503 SC with fcgi

2020-04-08 Thread Aleksandar Lazic

On 08.04.20 08:52, Ionel GARDAIS wrote:

It works with 127.0.0.1:29001 (the listener I configured for this pool)


That's an important successful test.
I personally prefer the tcp way to afoid such problems.


About the socket :
- it lives in /run/php with
$ ls -alF /run/php/speedtest-fpm.sock
srw-rw 1 www-data www-data 0 Apr  7 21:11 /run/php/speedtest.sock=

- /run/php is owned by www-data:www-data with 755 perms
$ ls -alF /run/ | grep php
drwxr-xr-x  2 www-datawww-data 180 Apr  7 21:11 php/

- haproxy user is member of www-data group
$ groups haproxy
haproxy : haproxy www-data


Please can you share the config of your haproxy.



Debug logs are silent about any permission problem :

$ egrep -i "(cgi|fpm|sock|perm)" /var/log/haproxy.log
Apr  8 08:42:36 ns3089939 haproxy[1151]: #011[FCGI] fcgi-app
Apr  8 08:42:36 ns3089939 haproxy[1151]: [WARNING] 098/084236 (1151) : Failed 
to connect to the old process socket '/run/haproxy/admin.sock'
Apr  8 08:42:36 ns3089939 haproxy[1151]: [ALERT] 098/084236 (1151) : Failed to 
get the sockets from the old process!


Are you running haproxy with  chroot setuped?
Can you try to run the following.

Stop haproxy.
strace -tTfveall -a1024 -s1024 -o haproxy-trace.txt haproxy -f 
 -d
Make a request and stop then haproxy.
Compress haproxy-trace.txt and share here.


Apr  8 08:42:36 ns3089939 haproxy[1151]: Proxy bck-speed-fpm started.
Apr  8 08:43:00 ns3089939 haproxy[1152]: 2a01:cb00:663:fd00:20b5:6759:d972:be50:63090 
[08/Apr/2020:08:43:00.648] ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 2/1/0/0/3 
0/0 "GET /backend/getIP.php?isp=true=km=0.7819314534908071 HTTP/1.1"
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:bck-speed-fpm.clicls[0026:0025]
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:bck-speed-fpm.closed[0026:0025]
Apr  8 08:43:00 ns3089939 haproxy[1152]: 2a01:cb00:663:fd00:20b5:6759:d972:be50:63091 
[08/Apr/2020:08:43:00.896] ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 
2/1/0/0/3 0/0 "GET /backend/empty.php?r=0.7112678877934051 HTTP/1.1"
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0015:bck-speed-fpm.clicls[0025:0026]
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0015:bck-speed-fpm.closed[0025:0026]
Apr  8 08:43:01 ns3089939 haproxy[1152]: 2a01:cb00:663:fd00:20b5:6759:d972:be50:63092 
[08/Apr/2020:08:43:01.724] ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 
2/1/0/0/3 0/0 "GET /backend/empty.php?r=0.9895406737322412 HTTP/1.1"
Apr  8 08:43:01 ns3089939 haproxy[1151]: 
0016:bck-speed-fpm.clicls[0025:0026]
Apr  8 08:43:01 ns3089939 haproxy[1151]: 
0016:bck-speed-fpm.closed[0025:0026]
Apr  8 08:43:02 ns3089939 haproxy[1152]: 2a01:cb00:663:fd00:20b5:6759:d972:be50:63093 
[08/Apr/2020:08:43:02.041] ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 
2/1/0/0/3 0/0 "GET /backend/empty.php?r=0.7692377905794734 HTTP/1.1"
Apr  8 08:43:02 ns3089939 haproxy[1151]: 
0017:bck-speed-fpm.clicls[0025:0027]
Apr  8 08:43:02 ns3089939 haproxy[1151]: 
0017:bck-speed-fpm.closed[0025:0027]


One calling sequence is :

Apr  8 08:43:00 ns3089939 haproxy[1151]: 0014:ft-secure.accept(000b)=0026 from 
[2a01:cb00:663:fd00:20b5:6759:d972:be50:63090] ALPN=
Apr  8 08:43:00 ns3089939 haproxy[1151]: 0014:ft-secure.clireq[0026:]: GET 
/backend/getIP.php?isp=true=km=0.7819314534908071 HTTP/1.1
Apr  8 08:43:00 ns3089939 haproxy[1152]: 2a01:cb00:663:fd00:20b5:6759:d972:be50:63090 
[08/Apr/2020:08:43:00.648] ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 2/1/0/0/3 
0/0 "GET /backend/getIP.php?isp=true=km=0.7819314534908071 HTTP/1.1"
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: host: server
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: accept-encoding: gzip, deflate
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: cookie: NG_TRANSLATE_LANG_KEY=%22fr%22
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: accept: */*
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: user-agent: Mozilla/5.0 (Macintosh; 
Intel Mac OS X 10_11_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 
Safari/605.1.15
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: accept-language: fr-fr
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: referer: 
https://server/speedtest_worker.js?r=0.5199684076229355
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:ft-secure.clihdr[0026:]: dnt: 1
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:bck-speed-fpm.clicls[0026:0025]
Apr  8 08:43:00 ns3089939 haproxy[1151]: 
0014:bck-speed-fpm.closed[0026:0025]








Re: [*EXT*] Re: 503 SC with fcgi

2020-04-07 Thread Aleksandar Lazic

On 07.04.20 21:17, Ionel GARDAIS wrote:

Alexander,

I had it working by using a classic IP:port listener for php-fpm.
neither /run/php/speedtest-fpm.sock nor unix@/run/php/speedtest-fpm.sock 
<mailto:unix@/run/php/speedtest-fpm.sock> worked


Can you run HAProxy in debug mode?
What's the permission for the socket?
Can you try to use 127.0.0.1:9000 instead of the unix socket, just to be sure 
that's not a permission problem.


--
Ionel GARDAIS
Tech'Advantage CIO - IT Team manager

--
*De: *"Ionel GARDAIS" 
*À: *"Aleksandar Lazic" 
*Cc: *"haproxy" 
*Envoyé: *Mardi 7 Avril 2020 20:23:42
*Objet: *Re: [*EXT*] Re: 503 SC with fcgi

Nothing.
Quiet as the streets during COVID-19.

--
Ionel GARDAIS
Tech'Advantage CIO - IT Team manager

------
*De: *"Aleksandar Lazic" 
*À: *"Ionel GARDAIS" 
*Cc: *"haproxy" 
*Envoyé: *Mardi 7 Avril 2020 17:58:24
*Objet: *[*EXT*] Re: 503 SC with fcgi

What's in the php error log?


Apr 7, 2020 5:18:11 PM Ionel GARDAIS :

Hi,

I'm giving a try to FCGI.
I'm running 2.1.3-3 on debian.
I follow 
https://www.haproxy.com/fr/blog/load-balancing-php-fpm-with-haproxy-and-fastcgi/

Here are the relevant parts of the config :

acl to-slash path /
acl static_content path_end .ai .css .eot .gz .html .ico .js .json .png 
.svg .ts .ttf .woff .woff2

use_backend bck-speed-fpm if host-speedtest !to-slash !static_content
default_backend bck-nginx

backend bck-speed-fpm
     use-fcgi-app speedtest-fpm
     server fpm /run/php/speedtest-fpm.sock proto fcgi

fcgi-app speedtest-fpm
     log-stderr global
     docroot /
     index index.php
     path-info ^(/.+\.php)(/.*)?$
#   path-info ^(.+?\.php)(/.*)$

FPM runs in a chroot hence the docroot to /
Nginx serves correctly static files as expected.
Non-static files are handled by the bcc-speed-fpm backend but all I got is

ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 1/1/0/0/3 0/0 "GET 
/info.php HTTP/1.1"

haproxy user is part of the www-data group who owns the fpm socket.

Am I missing something ?

Thanks,
Ionel
-- 
232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON

Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301










Re: 503 SC with fcgi

2020-04-07 Thread Aleksandar Lazic
What's in the php error log?


Apr 7, 2020 5:18:11 PM Ionel GARDAIS :

> Hi,
> 
> I'm giving a try to FCGI.
> I'm running 2.1.3-3 on debian.
> I follow 
> https://www.haproxy.com/fr/blog/load-balancing-php-fpm-with-haproxy-and-fastcgi/
> 
> Here are the relevant parts of the config :
> 
> acl to-slash path /
> acl static_content path_end .ai .css .eot .gz .html .ico .js .json .png .svg 
> .ts .ttf .woff .woff2
> 
> use_backend bck-speed-fpm if host-speedtest !to-slash !static_content
> default_backend bck-nginx
> 
> backend bck-speed-fpm
> use-fcgi-app speedtest-fpm
> server fpm /run/php/speedtest-fpm.sock proto fcgi
> 
> fcgi-app speedtest-fpm
> log-stderr global
> docroot /
> index index.php
> path-info ^(/.+\.php)(/.*)?$
> # path-info ^(.+?\.php)(/.*)$
> 
> FPM runs in a chroot hence the docroot to /
> Nginx serves correctly static files as expected.
> Non-static files are handled by the bcc-speed-fpm backend but all I got is
> 
> ft-secure~ bck-speed-fpm/fpm 0/0/-1/-1/0 503 9965 - - SC-- 1/1/0/0/3 0/0 "GET 
> /info.php HTTP/1.1"
> 
> haproxy user is part of the www-data group who owns the fpm socket.
> 
> Am I missing something ?
> 
> Thanks,
> Ionel
> -- 
> 232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON
> Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301
> 
> 




Re: [RFC] Consistent Hashing for Replica Sharding

2020-04-06 Thread Aleksandar Lazic

Hi.

On 03.04.20 09:16, Dario Di Pasquale wrote:
Hi! I write on behalf of Immobiliare.it, an Italian company leader in the real 
estate services and advertising market, we are using almost exclusively HAProxy 
for our load-balancing. In particular, we are using a patched version of HAProxy 
to balance requests to our cache servers. Long story short, we set up an 
improved version of the replicated, sharded cache pattern: each server both 
maintains its cache entries and a subset of entries of the other servers. Doing 
that, we ensure that a request for an entity can be sent either to its server 
(the one provided by the HAProxy's consistent hashing algorithm) or to its 
second server (the next server in the tree after the one chosen by the 
consistent hashing algorithm), so we can ensure that all the entries are still 
present if a single server crashes, and when many server crashes, only a little 
subset of those entries were lost. To do that we created a new consistent 
hashing algorithm (consistent-2x) that, once found the server 
that has to serve the request, also looks for another server in the tree 
(which should reside on different host) and sends the request either to the 
first or to the second in a random fashion, still considering server's weight.


The whole implementation of the above mentioned algorithm requires the 
following changes:

the tree we build is not driven by weights but it is static, so if a server 
crashes HAProxy does not re-balance requests (consistent-2x algorithm does this); 
weights are used instead to chose one of the two selected servers; servers' IDs 
should follow a given pattern: servers on the same machine should give the same 
result when their IDs is divided by 1000; consistent-2x algorithm. We are looking 
forward to hearing from you. We wish our contribution could be useful for as many people as possible.


Sounds quit interesting. I would help when you share the patch and start the
discussion about that RFE (Request for enhancement)


Best,

Dario Di Pasquale

Immobiliare.it



Regards
Aleks



Re: SameSite=None for persistent session cookie, problem with old browsers

2020-04-02 Thread Aleksandar Lazic

Hi.

On 02.04.20 09:36, Matthias Zepf wrote:

Hi,

for a client we develop a web shop application that handles payment by 
redirecting the user to a page of a payment service provider. After successful 
(or failed) payment the user is redirected back to our application with a post 
request. With Chrome 80 this began to be a problem because on cross-domain post 
requests the cookies are no longer transmitted. This can be fixed by setting 
SameSite=None on the cookies, what we did (also for the haproxy persistent 
session cookie) and it works fine.

But there is a new problem: old browsers, especially Safari on macOS < 10.15 and 
iOS < 13. These browsers do not know of the value “None” for parameter “SameSite” 
and treat unknown values as “Strict”. So, no cookies for these browsers on the 
cross-domain post request.

For the web application we fixed this by adding 2 cookies, one with 
SameSite=None and another (“legacy” cookie) without SameSite parameter.

Any ideas on how to handle this problem for haproxy?


Just an idea.

You can try to use 2 backends as the cookie statement can be set per backend.

use_backend leagcy_clients if { req.hdr(user-agent) -m sub ios } # or what ever 
the UA string is
use_backend new_clients if !{ req.hdr(user-agent) -m sub ios } # or what ever 
the UA string is

Examples are from here 
https://www.haproxy.com/blog/introduction-to-haproxy-acls/

This will be be changed when the UA is gone which is the plan from google.

https://www.zdnet.com/article/google-to-phase-out-user-agent-strings-in-chrome/
https://wicg.github.io/ua-client-hints/



Thanks
Matthias


Regards
Aleks



Re: testing and validating complex haproxy.conf rules

2020-04-01 Thread Aleksandar Lazic

Hi Dave

On 01.04.20 00:36, Dave Cottlehuber wrote:

On Tue, 31 Mar 2020, at 07:53, Aleksandar Lazic wrote:

Hi Dave.

On 31.03.20 09:24, Dave Cottlehuber wrote:

hi all,

Our main haproxy.conf has practically become sentient... it's reached the
point where the number of url redirects and similar incantations is very
hard to reason about, and certainly not test or validate, until it's
shipped. In fact I deploy to a "B" cluster node, and verify most changes
on a spare production node. This is not always possible to ensure that
existing acls and url redirects aren't broken by the changes.

For example:

https://%[hdr(host)]%[url,regsub(/$,)] ...

didn't do what the person who deployed it thinks it does - easy enough to
fix. How could we have tested this locally before committing it?

Is there any easy-ish way to try out these rules, almost like you
could in a REPL?

Once we've written them, and committed them to our ansible repos, is there
any way to unit test the whole config, to avoid regressions?

90% of these commits relate to remapping and redirecting urls from patterns.


Please can you tell us which version of HAProxy and some more details
from the config.
Maybe you can split the redirects, for example can you use a map for
the host part.


thanks Aleks,

In this case it's haproxy 2.1, and the config is complex.

This is a generic problem, not one for a single rule -- I need to find a way
to enable other people "unit test" their changes, before committing, and,
once committed, to avoid breaking production, be able to validate that the
most recent change doesn't break existing functions (more unit tests but
over the whole config). I can spin up a full staging environment if
necessary but I'm hoping somebody has a clever hack to avoid this.

Our newer stuff looks a bit like this with a map file:

   http-requestredirect  code 301  location 
%[capture.req.uri,map(/usr/local/etc/haproxy/redirects.map)] if { 
capture.req.uri,map(/usr/local/etc/haproxy/redirects.map) -m found }

but there are hundreds of acls that can overlap, or even override the 
straightforward logic of the map. That's what I need to find a way to deal with.


Well I think that you reach the limit of a static config file.

How about to use some filters like SPOE or Fcgi-app
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#9

I assume from the commits in 2.2 that there will be a filter possibility with 
LUA but I don't know how usable or complete it will be.

Maybe a staging environment isn't such a bad idea ;-)


A+
Dave



Regards
Aleks



Re: testing and validating complex haproxy.conf rules

2020-03-31 Thread Aleksandar Lazic

Hi Dave.

On 31.03.20 09:24, Dave Cottlehuber wrote:

hi all,

Our main haproxy.conf has practically become sentient... it's reached the
point where the number of url redirects and similar incantations is very
hard to reason about, and certainly not test or validate, until it's
shipped. In fact I deploy to a "B" cluster node, and verify most changes
on a spare production node. This is not always possible to ensure that
existing acls and url redirects aren't broken by the changes.

For example:

https://%[hdr(host)]%[url,regsub(/$,)] ...

didn't do what the person who deployed it thinks it does - easy enough to
fix. How could we have tested this locally before committing it?

Is there any easy-ish way to try out these rules, almost like you
could in a REPL?

Once we've written them, and committed them to our ansible repos, is there
any way to unit test the whole config, to avoid regressions?

90% of these commits relate to remapping and redirecting urls from patterns.


Please can you tell us which version of HAProxy and some more details from the 
config.
Maybe you can split the redirects, for example can you use a map for the host 
part.


A+
Dave


Regards
Aleks



Re: [RFC] BUG/MEDIUM: Checks: support for HTTP health checks with POST and data corrupted by extra connection close

2020-03-26 Thread Aleksandar Lazic

On 26.03.20 09:42, Willy Tarreau wrote:

On Thu, Mar 26, 2020 at 09:25:31AM +0100, Christopher Faulet wrote:

It is a good idea. For now, I have only few idea though. For the header
part, it must not be a raw block. Because it will be really hard to keep the
same syntax with the HTX. I propose to keep same syntax than http-request
rules :

http-check add-header  

And later :

http-check set-header  
http-check del-header 
http-check replace-header   
http-check replace-value   


I don't think we'll need to perform such operations in health checks because
they're normally only useful for real traffic. Here everything is built from
scratch so deleting a header or replacing it will be of very limited use, it
just means they would have been added by a previous rule. Also thinking about
this when combined with multiple requests gives me a headache :-)


Well I think that could be required when the backend have different vhosts
and ssl-vhosts.

Can we add request|respone to make it a little bit clear when which header will
be set.

# set session cookie for example
http-check request set-header  |
http-check request set-sni  |

http-check response expect 
# I'm not sure if there is a use case for that
http-check response del-header 


What I like is to be able to define a check backend and use the track keyword
for checks.

http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-track

Will this be also available in the first implementation?


Another idea could be to have a syntax similar to the HTTP return action :

http-check hdr   hdr   string 

or


why not both => AND but only one is usable in one check sequence?


http-check hdr   hdr   file 


These ones do seem like good ideas. They continue to allow to fully
define a check in a simple statement. And maybe later when we start
to think about sending check sequences they will be very convenient
because each line could define an entire request (possibly reusing
parts from the previous response if needed).


The request URI is passed on the "option httpchk" line. But, maybe the
method, the path and the version may also be defined on such line.


Agreed.


There are many other problems to deal with. But it is a first step. The
content-length must be automatically deduced when a body is defined.


Fully agreed!


For
current versions, the connection header must also be added the same way it
is done today.


Sure. I'd say it's a "detail" in that it's not visible from the users' end.


During my refactoring, I can try to first work on a way to support such
syntax on current versions. But we must be sure to have the right syntax
first. It is the harder part :)


Yes, that's exactly my point as well. If we figure the long-term picture,
we can narrow it down for a first implementation.

Willy






Re: LogParser friendly logs

2020-03-20 Thread Aleksandar Lazic



On 20.03.20 14:06, Илья Шипицин wrote:

I am familiar with custom formats.
what I mean is (sample from IIS log)

so I can query it like "select * from ... where sc-status=200"  without prior knowledge 
what field "sc-status" is (format might change from file to file)

also, I guess log exporters may take advantage from it.


#Software: Microsoft Internet Information Services 8.5
#Version: 1.0
#Date: 2017-06-26 13:09:21
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username 
c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status 
time-taken
2017-06-26 13:09:21 192.168.183.152 GET / - 808 - 10.33.41.142 - - 200 0 64 
11451
2017-06-26 13:09:21 192.168.183.152 GET / - 808 - 10.33.41.142 - - 200 0 0 2378
2017-06-26 13:11:23 192.168.183.152 GET /favicon2.iso - 808 - 10.33.41.142 
Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 1
2017-06-26 13:11:23 192.168.183.152 GET /favicon.iso - 808 - 10.33.41.142 
Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 2


Ah okay, well as far as I know is this not implemented.


пт, 20 мар. 2020 г. в 17:54, Aleksandar Lazic mailto:al-hapr...@none.at>>:

Hi.

On 20.03.20 13:15, Илья Шипицин wrote:
 > Hello,
 >
 > there's Microsoft LogParser.
 > good thing about it, it likes self-consistent CSV logs (or TSV), when 
first line is fields.
 >
 > it helps to change log format on the fly (for example, in IIS), so IIS 
starts new log once format is changed.
 >
 > you can query such logs without prior knowledge of fields, because 
fields are in log.
 >
 > I did not find such possibility in haproxy. is it supported ?
 > or must yet to be done.

You can create any custom log format which you want with `log-format` 
directive.

https://cbonte.github.io/haproxy-dconv/2.1/configuration.html#8.2.4
https://www.haproxy.com/blog/haproxy-log-customization/

Do this answer your question?

 > Cheers,
 > Ilya Shipitcin

Regards
Aleks






Re: LogParser friendly logs

2020-03-20 Thread Aleksandar Lazic

Hi.

On 20.03.20 13:15, Илья Шипицин wrote:

Hello,

there's Microsoft LogParser.
good thing about it, it likes self-consistent CSV logs (or TSV), when first 
line is fields.

it helps to change log format on the fly (for example, in IIS), so IIS starts 
new log once format is changed.

you can query such logs without prior knowledge of fields, because fields are 
in log.

I did not find such possibility in haproxy. is it supported ?
or must yet to be done.


You can create any custom log format which you want with `log-format` directive.

https://cbonte.github.io/haproxy-dconv/2.1/configuration.html#8.2.4
https://www.haproxy.com/blog/haproxy-log-customization/

Do this answer your question?


Cheers,
Ilya Shipitcin


Regards
Aleks



Re: [PATCH]: BUILD link to lib atomic on ARM

2020-03-15 Thread Aleksandar Lazic

On 15.03.20 11:33, David CARLIER wrote:

Hi

Here a little patch proposal to fix build on ARM.

Regards.


Ähm, maybe my mail client hide the Patch because I can't see it ;-)?

Regards
Aleks



Re: s390x and HAProxy?

2020-03-13 Thread Aleksandar Lazic
Mar 13, 2020 12:11:16 PM Илья Шипицин :

> initial motivation was that article
>
> https://docs.travis-ci.com/user/multi-cpu-architectures

Thanks for the link.

> travis allows to run builds on various archs, why not to test ?

Full Agreeing. ;-)

Would be interesting if anyone use it on Host or power.

> пт, 13 мар. 2020 г. в 16:07, Aleksandar Lazic < al-hapr...@none.at >:
>
>
> > Hi.
> >
> > I'm wondering that this target is tested.
> > http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=d726386421dcd184ca2518d17332f82e9cd79f2d
> >
> > Are there really user which runs HAProxy on Host? 8-O
> > How perform HAProxy on that platform?
> >
> > Regards
> > Aleks





s390x and HAProxy?

2020-03-13 Thread Aleksandar Lazic
Hi.

I'm wondering that this target is tested.
http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=d726386421dcd184ca2518d17332f82e9cd79f2d

Are there really user which runs HAProxy on Host? 8-O
How perform HAProxy on that platform?

Regards
Aleks





Re: Let's Encrypt ca-file for check-ssl on server line

2020-03-03 Thread Aleksandar Lazic

Hi all.

Thanks for help.

Regards
Aleks

On 02.03.20 23:25, Tim Düsterhus wrote:

Aleks,

Am 02.03.20 um 23:19 schrieb Aleksandar Lazic:

I think I found the solution.

```
curl -vO https://letsencrypt.org/certs/isrgrootx1.pem.txt
curl -vo https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt
curl -vO https://letsencrypt.org/certs/letsencryptauthorityx3.pem.txt
cat letsencryptauthorityx3.pem.txt lets-encrypt-x3-cross-signed.pem.txt
isrgrootx1.pem.txt > /etc/haproxy/letsencryptauthorityx3.pem
```

Now the server line is this.

```
server static_stor storage.sbg.cloud.ovh.net:443 resolvers mydns check
check-ssl check-sni storage.sbg.cloud.ovh.net sni
str(storage.sbg.cloud.ovh.net) ca-file
/etc/haproxy/letsencryptauthorityx3.pem backup

```

No more SSL Handshake errors.



Yes. The certificate chain OVH uses is the one chaining to IdenTrust
(DST), not the one to ISRG. You can easily check this by looking up the
TLS details within your favorite web browser.

Best regards
Tim Düsterhus






Re: Let's Encrypt ca-file for check-ssl on server line

2020-03-02 Thread Aleksandar Lazic

On 02.03.20 22:52, Aleksandar Lazic wrote:

Hi Lukas.

On 02.03.20 22:38, Lukas Tribus wrote:

Hello Aleks,


On Mon, 2 Mar 2020 at 22:21, Aleksandar Lazic  wrote:

check-ssl check-sni str("storage.sbg.cloud.ovh.net")


For the health check it's:
check-sni storage.sbg.cloud.ovh.net

(not a expression as per the doc: check-sni )


and for the traffic:
sni str(storage.sbg.cloud.ovh.net)

(as per the doc which says: sni )


You need both.


Thank you. I have changed the server line but still handshake error.
I think that the ca-file is wrong, I haven't found anything what's the proper 
ca-file for letsencrypt is.

```
server static_stor storage.sbg.cloud.ovh.net:443 resolvers mydns check 
check-ssl check-sni storage.sbg.cloud.ovh.net sni 
str(storage.sbg.cloud.ovh.net) ca-file 
/etc/letsencrypt/live/lb1.panomax.com/fullchain.pem backup
```

The same error with /etc/ssl/certs/ISRG_Root_X1.pem

```
Mar  2 22:48:59 lb1 haproxy[19551]: [WARNING] 061/224859 (19553) : Backup Server 
be_static/static_stor is DOWN, reason: Socket error, info: "SSL handshake 
failure", check duration: 16ms. 0 active and 0 backup servers left. 0 sessions 
active, 0 requeued, 0 remaining in queue.
Mar  2 22:48:59 lb1 haproxy[19553]: Backup Server be_static/static_stor is DOWN, reason: 
Socket error, info: "SSL handshake failure", check duration: 16ms. 0 active and 
0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
```


I think I found the solution.

```
curl -vO https://letsencrypt.org/certs/isrgrootx1.pem.txt
curl -vo https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt
curl -vO https://letsencrypt.org/certs/letsencryptauthorityx3.pem.txt
cat letsencryptauthorityx3.pem.txt lets-encrypt-x3-cross-signed.pem.txt 
isrgrootx1.pem.txt > /etc/haproxy/letsencryptauthorityx3.pem
```

Now the server line is this.

```
server static_stor storage.sbg.cloud.ovh.net:443 resolvers mydns check 
check-ssl check-sni storage.sbg.cloud.ovh.net sni 
str(storage.sbg.cloud.ovh.net) ca-file /etc/haproxy/letsencryptauthorityx3.pem 
backup

```

No more SSL Handshake errors.


Lukas


Regards
Aleks



Re: Let's Encrypt ca-file for check-ssl on server line

2020-03-02 Thread Aleksandar Lazic

Hi Lukas.

On 02.03.20 22:38, Lukas Tribus wrote:

Hello Aleks,


On Mon, 2 Mar 2020 at 22:21, Aleksandar Lazic  wrote:

check-ssl check-sni str("storage.sbg.cloud.ovh.net")


For the health check it's:
check-sni storage.sbg.cloud.ovh.net

(not a expression as per the doc: check-sni )


and for the traffic:
sni str(storage.sbg.cloud.ovh.net)

(as per the doc which says: sni )


You need both.


Thank you. I have changed the server line but still handshake error.
I think that the ca-file is wrong, I haven't found anything what's the proper 
ca-file for letsencrypt is.

```
server static_stor storage.sbg.cloud.ovh.net:443 resolvers mydns check 
check-ssl check-sni storage.sbg.cloud.ovh.net sni 
str(storage.sbg.cloud.ovh.net) ca-file 
/etc/letsencrypt/live/lb1.panomax.com/fullchain.pem backup
```

The same error with /etc/ssl/certs/ISRG_Root_X1.pem

```
Mar  2 22:48:59 lb1 haproxy[19551]: [WARNING] 061/224859 (19553) : Backup Server 
be_static/static_stor is DOWN, reason: Socket error, info: "SSL handshake 
failure", check duration: 16ms. 0 active and 0 backup servers left. 0 sessions 
active, 0 requeued, 0 remaining in queue.
Mar  2 22:48:59 lb1 haproxy[19553]: Backup Server be_static/static_stor is DOWN, reason: 
Socket error, info: "SSL handshake failure", check duration: 16ms. 0 active and 
0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
```


Lukas


Regards
Aleks



Let's Encrypt ca-file for check-ssl on server line

2020-03-02 Thread Aleksandar Lazic

Hi.

I try to use HA-Proxy version 2.1.3-1ppa1~bionic with Let's Encrypt and 
ssl-check.

My Serverline looks like this

```
server static_stor storage.sbg.cloud.ovh.net:443 resolvers mydns check check-ssl 
check-sni str("storage.sbg.cloud.ovh.net") ca-file 
/etc/ssl/certs/ISRG_Root_X1.pem backup
```

The log entry for the check is this.

```
Mar  2 22:13:17 lb1 haproxy[17027]: Backup Server be_static/static_stor is DOWN, reason: 
Socket error, info: "SSL handshake failure", check dura
tion: 7ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 
remaining in queue.
```

I have tried also the fullchain.pem but nothing works.

I'm sure I miss something simple and hope that someone can help me to fix this.

Best regards

Aleks

```

root@lb1:~# haproxy -vv
HA-Proxy version 2.1.3-1ppa1~bionic 2020/02/14 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.3.html
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-HsURzh/haproxy-2.1.3=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
-Wno-missing-field-initializers -Wno-implicit-fallthrough 
-Wno-stringop-overflow -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
USE_ZLIB=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT 
+PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +REGPARM 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB 
-SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD 
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with the Prometheus exporter as a service

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
    fcgi : mode=HTTP   side=BE    mux=FCGI
    : mode=HTTP   side=FE|BE mux=H1
    : mode=TCP    side=FE|BE mux=PASS

Available services :
    prometheus-exporter

Available filters :
    [SPOE] spoe
    [CACHE] cache
    [FCGI] fcgi-app
    [TRACE] trace
    [COMP] compression

```




Re: Enquiry regarding Loadbalancer functionality

2020-02-24 Thread Aleksandar Lazic

Hi Mohamed.

On 22.02.20 18:24, Mohamed Sherif wrote:

Hello,

I am writing to you to enquire if the load balancer is capable of determine the 
target node address and path that address to the source allowing a direct 
communication between the source and destination with load balancer in between


What do you mean with "determine the target node address and path"?
Can you try to draw a ascii picture so that we can understand your request 
better.

Maybe you can find some answers in this blog post.
https://www.haproxy.com/blog/building-blocks-of-a-modern-proxy/


Regards
Mohamed Sherif


Best regards
Aleks



Re: Mirror concepts

2020-02-12 Thread Aleksandar Lazic


Hi.


Feb 13, 2020 1:04:58 AM Panneer Selvam :

> Hi I need quick helping for HAproxy mirroring concepts

Please can you tell us a little bit more what you need and please answer to 
all, thanks.

Have you read an understand the post?
https://www.haproxy.com/blog/haproxy-traffic-mirroring-for-real-world-testing/

> Thanks Panneer

Regards
Aleks




Re: spoa-mirror

2020-02-12 Thread Aleksandar Lazic

Hi Dmitry.

Please keep the mailing-list in the loop, thanks.

On 12.02.20 08:17, Дмитрий Меркулов wrote:

Hello Aleks
Information about version, haproxy and spoa-agent conf in the attachment.


HA-Proxy version 2.1.0 2019/11/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.0.html
...

 conf
...

backend mirroragent
mode tcp
option tcplog
balance roundrobin
timeout connect 5s
timeout server 5s
server agent 0.0.0.0:12345 #check observe layer4 inter 3000 rise 2 fall 2

listen port:10026
mode http
option httplog
bind *:10026
maxconn 1000
filter spoe engine trafficmirror config /etc/haproxy/mirror.cfg
balance roundrobin
###


During the request, in the haproxy logs I see the following strings:
Feb 12 10:13:47 localhost haproxy[6374]: SPOE: [mirroragent] 
 sid=52 st=0 0/0/0/0/0 2/2 0/0 1/8
Feb 12 10:13:47 localhost haproxy[6374]: 194.87.225.137:54385 [12/Feb/2020:10:13:47.615] 
port:10026 port:10026/DPS1 1/0/0/2/3 200 138 - -  1/1/0/0/0 0/0 "GET / 
HTTP/1.1"
If I create 3-4 requests in a row, then in the agent’s logs:
[ 2][   66.224617]   <2:27> (E) Failed to send frame length: Broken pipe


I can't find this message in the code.
https://github.com/haproxytech/spoa-mirror/search?q=Failed+to+send+frame+length_q=Failed+to+send+frame+length

Can you try to build the mirror with debug enabled?
./configure --enable-debug

Can you try to run `strace -fveall -a1024 -s1024 -o spoa-mirror.log spoa-mirror 
`

I don't use the mirror by my self, hopefully someone on the list can help you 
more to debug the issue.



Вторник, 11 февраля 2020, 19:23 +03:00 от Aleksandar Lazic 
:
Hi Dmitry.

On 11.02.20 15:29, Дмитрий Меркулов wrote:
 > Good day!
 > Could you help with the setup spoa-mirror v1.2.1?
 > SPOE: [mirroragent]  sid=0 st=0 
0/0/0/0/0 1/1 0/0 0/1
 > I see the backend sends data to the agent, but the agent does not 
broadcast anything to the destination server.
 > I run the agent with the following command
 > spoa-mirror --runtime 0 -u http://***.***.***.**:*/  --logfile 
W:/var/log/haproxy-mirror.log -n 1 -i 2s -b 30
 > Sometimes in the log I get the following error:
 > [ 1][  110.567823]   <7:10> (E) Failed to send frame length: Broken pipe
 > I would be very grateful for your help.

Which version of haproxy do you use?
haproxy -vv

What's your haproxy config?
Do you have any logs from haproxy?

Please don't send Screenshots because they are not visible in text only 
mailers, thanks.

 > --
 > Dmitry Merkulov

Regards
Aleks

-- Dmitry Merkulov





Re: spoa-mirror

2020-02-11 Thread Aleksandar Lazic

Hi Dmitry.

On 11.02.20 15:29, Дмитрий Меркулов wrote:

Good day!
Could you help with the setup spoa-mirror v1.2.1?
SPOE: [mirroragent]  sid=0 st=0 0/0/0/0/0 
1/1 0/0 0/1
I see the backend sends data to the agent, but the agent does not broadcast 
anything to the destination server.
I run the agent with the following command
spoa-mirror --runtime 0 -u http://***.***.***.**:*/  --logfile 
W:/var/log/haproxy-mirror.log -n 1 -i 2s -b 30
Sometimes in the log I get the following error:
[ 1][  110.567823]   <7:10> (E) Failed to send frame length: Broken pipe
I would be very grateful for your help.


Which version of haproxy do you use?
haproxy -vv

What's your haproxy config?
Do you have any logs from haproxy?

Please don't send Screenshots because they are not visible in text only 
mailers, thanks.


--
Dmitry Merkulov


Regards
Aleks



Re: Configuring HAProxy

2020-02-10 Thread Aleksandar Lazic

Dear Akshay Mangla.

On 10.02.20 06:00, Akshay Mangla wrote:

Hi Aleksandar,

I have made a few changes to the haproxy.cfg file and following are the outputs 
:-

HAPROXY.cfg
#-


[snipped]


frontend haproxy_inbound
         bind *:443 *[CHANGED PORT]*
         default_backend haproxy_httpd


Please read this blog post to setup ssl in haproxy.
https://www.haproxy.com/blog/haproxy-ssl-termination/


backend haproxy_httpd
         balance roundrobin
         mode http #(NOT NEEDED IF DEFINED IN DEFAULTS)
         option httpchk
         server lxapp14070.dc.corp.telstra.com 10.195.70.12:443 check * [Host 
and Port Changed]*
         server lxapp14071.dc.corp.telstra.com 10.195.70.13:443 check *[Host 
and Port Changed] *


try to add "ssl" to the server line.


1.*curl -v --max-time 30 127.0.0.1:5001*

[root@lxapp14012 ~]# curl -v --max-time 30 127.0.0.1:5001 
<http://127.0.0.1:5001>
* About to connect() to 127.0.0.1 port 5001 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:5001; Connection refused
* Closing connection 0
curl: (7) Failed connect to 127.0.0.1:5001; Connection refused


Does anything listen on that port?
https://en.wikipedia.org/wiki/Localhost


2. *curl -v --max-time 30 10.195.70.12:443*


to test https with curl you should add 'https://' before the URL

[snipped]


Also now when I run the command haproxy -db -f /etc/haproxy/haproxy.cfg i 
getting the following alert :-

*[root@lxapp14012 haproxy]# haproxy -db -f /etc/haproxy/haproxy.cfg
[ALERT] 040/155059 (20285) : Starting frontend haproxy_inbound: cannot bind 
socket [0.0.0.0:443]*

Is it something that should be taken care of or it can be ignored??


This isn't a serious question isn't it?
https://www.startpage.com/do/search?lui=english=english=web=could+not+bind+socket

Please check if there isn't another process running on this port.


Also when I try to check the status of haproxy i see many failed or disabled 
instances and the haproxy instance is not able to start properly:-

[root@lxapp14012 haproxy]# *service haproxy status -l*

Redirecting to /bin/systemctl status  -l haproxy.service
haproxy.service - HAProxy Load Balancer
    Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor 
preset: disabled)
    Active: failed (Result: exit-code) since Thu 2020-02-06 23:04:08 AEDT; 3 
days ago
   Process: 15069 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, 
status=0/SUCCESS)
   Process: 26084 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS (code=exited, 
status=1/FAILURE)
  Main PID: 26084 (code=exited, status=1/FAILURE)

Feb 06 23:04:08 lxapp14012 systemd[1]: Starting HAProxy Load Balancer...
Feb 06 23:04:08 lxapp14012 haproxy-systemd-wrapper[26084]: 
haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f 
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Feb 06 23:04:08 lxapp14012 haproxy-systemd-wrapper[26084]: [ALERT] 036/230408 (26086) 
: Starting frontend haproxy_inbound: cannot bind socket [0.0.0.0:443 
<http://0.0.0.0:443>]
Feb 06 23:04:08 lxapp14012 haproxy-systemd-wrapper[26084]: 
haproxy-systemd-wrapper: exit, haproxy RC=1
Feb 06 23:04:08 lxapp14012 systemd[1]: haproxy.service: main process exited, 
code=exited, status=1/FAILURE
Feb 06 23:04:08 lxapp14012 systemd[1]: Unit haproxy.service entered failed 
state.
Feb 06 23:04:08 lxapp14012 systemd[1]: haproxy.service failed.
Feb 06 23:04:24 lxapp14012 systemd[1]: Unit haproxy.service cannot be reloaded 
because it is inactive.
Feb 06 23:07:29 lxapp14012 systemd[1]: Unit haproxy.service cannot be reloaded 
because it is inactive.
Feb 06 23:14:40 lxapp14012 systemd[1]: Unit haproxy.service cannot be reloaded 
because it is inactive.

Can you please look into this and help us in finding the solution??


I would suggest to get some Linux courses to understand what these messages 
means, something like this, as you use a RHEL bases system.
https://www.redhat.com/en/services/training/rh124-red-hat-system-administration-i


Also if you are available is it possible to connect sometime and resolve these 
issue in one go??


Well it looks to me that you don't want to pay some support I don't think that 
I will connect to your machines.
If you are willing to pay for support I suggest to contact 
https://www.haproxy.com/ for a offer.


Regards,
Akshay


Regards
Aleks


On Sun, Feb 9, 2020 at 10:54 PM Aleksandar Lazic mailto:al-hapr...@none.at>> wrote:

Hi.

please keep the mailinglist in the loop.

On 06.02.20 10:23, Akshay Mangla wrote:
 > Hi Aleksandar,
 >
 > Apologies for sending in the screenshot.

No probs just a hint.

 > I got the following output when I ran the above commands :-
 >
 > *1.curl -v --max-time 30 http://127.0.0.1:5001/*
 >
 > [root@lxapp14012 ~]# curl -v --max-time 30 127.0.0.1:5001 
<http:/

Re: Configuring HAProxy

2020-02-09 Thread Aleksandar Lazic

Hi.

please keep the mailinglist in the loop.

On 06.02.20 10:23, Akshay Mangla wrote:

Hi Aleksandar,

Apologies for sending in the screenshot.


No probs just a hint.


I got the following output when I ran the above commands :-

*1.curl -v --max-time 30 http://127.0.0.1:5001/*

[root@lxapp14012 ~]# curl -v --max-time 30 127.0.0.1:5001 
<http://127.0.0.1:5001>
* About to connect() to 127.0.0.1 port 5001 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:5001 <http://127.0.0.1:5001>; Connection refused
* Closing connection 0
curl: (7) Failed connect to 127.0.0.1:5001 <http://127.0.0.1:5001>; Connection 
refused


Okay you should remove the "backend app" it looks like you don't need it.


*2. curl -v --max-time 30 http://10.195.77.21:7068*
*
*
* About to connect() to 10.195.77.21 port 7068 (#0)
*   Trying 10.195.77.21...
* Connected to 10.195.77.21 (10.195.77.21) port 7068 (#0)
 > GET / HTTP/1.1
 > User-Agent: curl/7.29.0
 > Host: 10.195.77.21:7068 <http://10.195.77.21:7068>
 > Accept: */*
 >
* Connection #0 to host 10.195.77.21 left intact*
*

*3.curl -v --max-time 30 http://10.195.77.22:7068*
*
*
* About to connect() to 10.195.77.22 port 7068 (#0)
*   Trying 10.195.77.22...
* Connected to 10.195.77.22 (10.195.77.22) port 7068 (#0)
 > GET / HTTP/1.1
 > User-Agent: curl/7.29.0
 > Host: 10.195.77.22:7068 <http://10.195.77.22:7068>
 > Accept: */*
 >
* Connection #0 to host 10.195.77.22 left intact*
*

*Following is the version of HAProxy*



[root@lxapp14012 ~]# haproxy -vv
HA-Proxy version 1.5.18 2016/05/10


[snipp]

Thanks. you sholuld consider to update it to the latest version.


*Also the outputs of the screenshot sent earlier is as below :-*

[root@lxapp14012 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid

[root@lxapp14012 ~]# haproxy -db -f /etc/haproxy/haproxy.cfg
[WARNING] 036/201733 (14778) : Server static/static is DOWN, reason: Layer4 connection 
problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 036/201733 (14778) : backend 'static' has no server available!
[WARNING] 036/201733 (14778) : Server app/app1 is DOWN, reason: Layer4 connection 
problem, info: "Connection refused", check duration: 0ms. 3 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 036/201734 (14778) : Server app/app2 is DOWN, reason: Layer4 connection 
problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 036/201734 (14778) : Server app/app3 is DOWN, reason: Layer4 connection 
problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 036/201734 (14778) : Server app/app4 is DOWN, reason: Layer4 connection 
problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 036/201734 (14778) : backend 'app' has no server available!


Yes clear there are no servers on the localhost.


[WARNING] 036/201734 (14778) : Server haproxy_httpd/lxapp14058.dc.corp.telstra.com 
<http://lxapp14058.dc.corp.telstra.com> is DOWN, reason: Layer7 invalid response, info: 
"<15><03><03>", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions 
active, 0 requeued, 0 remaining in queue.
[WARNING] 036/201735 (14778) : Server haproxy_httpd/lxapp14059.dc.corp.telstra.com 
<http://lxapp14059.dc.corp.telstra.com> is DOWN, reason: Layer7 invalid response, info: 
"<15><03><03>", check duration: 2ms. 0 active and 0 backup servers left. 0 sessions 
active, 0 requeued, 0 remaining in queue.
[ALERT] 036/201735 (14778) : backend 'haproxy_httpd' has no server available!


Looks like the backend expect https or tcp.

Which protocol expect the servers lxapp*.dc.corp.telstra.com ?


Regards,
Akshay


Regards
Aleks


On Thu, Feb 6, 2020 at 1:43 PM Aleksandar Lazic mailto:al-hapr...@none.at>> wrote:

Hi.

On 06.02.20 07:08, Akshay Mangla wrote:
 > Hi HAProxy Team,
 >
 > I have been trying to install HAProxy on my vm machine and facing some 
difficulties in doing so.
 >
 > Following is the HAProxy config file that we have currently.
 >
 > #-
 > # Example configuration for a possible web application.  See the
 > # full configuration options online.
 > #
 > # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
 > #
 > #-
 >
 > 

Re: Configuring HAProxy

2020-02-06 Thread Aleksandar Lazic

Hi.

On 06.02.20 07:08, Akshay Mangla wrote:

Hi HAProxy Team,

I have been trying to install HAProxy on my vm machine and facing some 
difficulties in doing so.

Following is the HAProxy config file that we have currently.

#-
# Example configuration for a possible web application.  See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#-

#-
# Global settings
#-
global
     # to have these messages end up in /var/log/haproxy.log you will
     # need to:
     #
     # 1) configure syslog to accept network log events.  This is done
     #    by adding the '-r' option to the SYSLOGD_OPTIONS in
     #    /etc/sysconfig/syslog
     #
     # 2) configure local2 events to go to the /var/log/haproxy.log
     #   file. A line like the following can be added to
     #   /etc/sysconfig/syslog
     #
     #    local2.*                       /var/log/haproxy.log
     #
     log         127.0.0.1 local2

     chroot      /var/lib/haproxy
     pidfile     /var/run/haproxy.pid
     maxconn     4000
     user        haproxy
     group       haproxy
     daemon

     # turn on stats unix socket
     stats socket /var/lib/haproxy/stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
     mode                    http
     log                     global
     option                  httplog
     option                  dontlognull
     option http-server-close
     option forwardfor       except 127.0.0.0/8
     option                  redispatch
     retries                 3
     timeout http-request    10s
     timeout queue           1m
     timeout connect         10s
     timeout client          1m
     timeout server          1m
     timeout http-keep-alive 10s
     timeout check           10s
     maxconn                 3000

#-
# main frontend which proxys to the backends
#-
frontend  main *:5000
     acl url_static       path_beg       -i /static /images /javascript 
/stylesheets
     acl url_static       path_end       -i .jpg .gif .png .css .js

     use_backend static          if url_static
     default_backend             app

#-
# static backend for serving up images, stylesheets and such
#-
backend static
     balance     roundrobin
     server      static 127.0.0.1:4331 check

#-
# round robin balancing between the various backends
#-
backend app
     balance     roundrobin
     server  app1 127.0.0.1:5001 check
     server  app2 127.0.0.1:5002 check
     server  app3 127.0.0.1:5003 check
     server  app4 127.0.0.1:5004 check

frontend haproxy_inbound
         bind *:7068
         default_backend haproxy_httpd

backend haproxy_httpd
         balance roundrobin
         mode http #(NOT NEEDED IF DEFINED IN DEFAULTS)
         option httpchk
         server lxapp14058.dc.corp.telstra.com 10.195.77.21:7068 check
         server lxapp14059.dc.corp.telstra.com 10.195.77.22:7068 check


I have added the lines at the end which are colored and ran the command ---> 
*/haproxy -c -f /etc/haproxy/haproxy.cfg/* which gave me an output that 
/*configuration file is valid*/.

When i tried to start it manually (in foreground, to test) with ---> */haproxy 
-db -f /etc/haproxy/haproxy.cfg/* it started giving me an error
image.png


I love screenshots, it's so easy to copy some text out of them ;-).
My suggestion would be to copy the text from the console to the mail
instead the screenshot.


Can you help me resolve this issue as I am stuck on this. Any suggestions would 
be appreciated.


I would assume that the backend is not a http backend as the httpchk fails.
What do you get when you execute the follwoing command from haproxy maschine?

curl -v --max-time 30 127.0.0.1:5001
curl -v --max-time 30 http://10.195.77.21:7068
curl -v --max-time 30 http://10.195.77.22:7068


Do let me know if you need any further information on this.


Which haproxy version do you use?
haproxy -vv



Regards,
Akshay


Regards
Aleks



Re: Enhancement plugin feature for haproxy

2020-01-25 Thread Aleksandar Lazic



On 22.01.20 08:23, Willy Tarreau wrote:

On Mon, Jan 20, 2020 at 10:06:20PM +0100, Christopher Faulet wrote:

Nuster evolves in parallel of
HAProxy. It is a fork of it, it is not a patchset on top of it. The nuster
developer never tried to make its project compatible with HAProxy. Or at
least, he never asked anything on the mailing list. So, of course, it is its
own right. There is no problem with that. But it could be a good idea to
have a better view on how the project will evolve to evaluate the pertinence
of a nuster plugin. Honestly, if you ever do the effort to work on it, it
could be better to rewrite all the project as a plugin of HAProxy and not a
fork of it. If so, it is probably a better idea to keep it in a separate
repository though. This will be easier to push new features, without the
burden to submit the patches on the ML.


I totally agree with this. If some minor extensions to haproxy would be
needed to do his thing better (maybe one hook here and there), we could
get them merged to ease his maintenance as long as they have zero impact
on the rest. But given that the project has been evolving for a few years
now, occasionally rebasing on new version, I guess the developer possibly
makes a living from his fork and has no particular interest in seeing his
code more easily used without him being kept in the loop. Of couse I could
be totally wrong but that's what it makes me think of. I don't think we
should bother him. He took care of renaming the project so as to present
it as a self-contained cache and not as a load balancer, and as such he
doesn't cause any harm to the haproxy project. And he seems to be doing
his job right by regularly merging stable branches, so I'm not seeing
anything wrong there that we should encourage to change. If all projects
forks were done this way, we'd all have less crap installed on our machines.

Of course it would be nice if he would participate a bit to the project,
but well, how could we expect any contribution back from anyone using
haproxy ? He will already have a tough work migrating to HTX :-)

Aleks, if you're interested in this project, I think you should contact
him. I suspect he will tell you he only covers the caching part and uses
haproxy only as the HTTP/TCP engine to run his cache, as is visible in
his example config, and that if you want to use a load balancer, he will
invite you to use a separate haproxy process. And that's important given
that he's doing file system accesses (hence high latencies and limited
security)!


Thanks Christopher and Willy for your feedback.
I will go back and think about the possible options ;-)


Regards,
Willy


Regards
Aleks



Re: Recommendations for deleting headers by regexp in 2.x?

2020-01-24 Thread Aleksandar Lazic


+1

Jan 24, 2020 8:28:33 AM Christopher Faulet :

> Le 23/01/2020 à 19:59, James Brown a écrit :
> 
> > I spent a couple of minutes and made the attached (pretty bad) patch to add 
> > a
> > del-header-by-prefix.
> > 
> > 
> 
> Just an idea. Instead of adding a new action, it could be cleaner to extend 
> the
> del-header action adding some keywords. Something like:
> 
> http-request del-header begin-with 
> http-request del-header end-with 
> http-request del-header match 
> 
> It could be also extended to replace-header and replace-value actions.
> 
> Just my 2 cents,
> -- 
> Christopher Faulet
> 




Re: Redirect and rewrite part of query string (using map files)

2020-01-18 Thread Aleksandar Lazic



Jan 18, 2020 2:31:40 PM bjun...@gmail.com :

> Am Samstag, 18. Januar 2020 schrieb Aleksandar Lazic < al-hapr...@none.at 
> [mailto:al-hapr...@none.at] >:
>
> > Hi Bjoern.
> >
> > On 18.01.20 14:02, bjun...@gmail.com [mailto:bjun...@gmail.com] wrote:
> >
> > > Am Samstag, 18. Januar 2020 schrieb Aleksandar Lazic < al-hapr...@none.at 
> > > [mailto:al-hapr...@none.at] >:
> > >
> > > Hi.
> > >
> > > On 18.01.20 13:11, bjun...@gmail.com [mailto:bjun...@gmail.com] wrote:
> > >
> > > Hi,
> > >
> > > i want to redirect the following (the value of the code param should be 
> > > rewritten):
> > >
> > > abc.de/?v=1=1530=3 [http://abc.de/?v=1=1530=3] < 
> > > http://abc.de/?v=1=1530; b=3 [http://abc.de/?v=1=1530=3] > -> 
> > > abc.de/?v=1=6780=3 [http://abc.de/?v=1=6780=3] < 
> > > http://abc.de/?v=1=6780; b=3 [http://abc.de/?v=1=6780=3] > 
> > > abc.it/?v=2=2400=2 [http://abc.it/?v=2=2400=2] < 
> > > http://abc.it/?v=2=2400; b=2 [http://abc.it/?v=2=2400=2] > -> 
> > > abc.it/?v=2=7150=2 [http://abc.it/?v=2=7150=2] < 
> > > http://abc.it/?v=2=7150; b=2 [http://abc.it/?v=2=7150=2] > 
> > > abc.fr [http://abc.fr] < http://abc.fr [http://abc.fr] > ..
> > > abc.se [http://abc.se] < http://abc.se [http://abc.se] > ..
> > > .
> > > .
> > >
> > > When i don't use maps, i can accomplish the task with the following lines 
> > > (but this needs many of those lines):
> > >
> > > http-request set-header X-Redirect-Url %[url,regsub(code=1530,code=67 
> > > 80,g)] if { hdr_reg(host) -i ^abc\.de$ }
> > > http-request set-header X-Redirect-Url %[url,regsub(code=2400,code=71 
> > > 50,g)] if { hdr_reg(host) -i ^abc\.it$ }
> > >
> > > http-request redirect code 302 location https://%[hdr(host)]%[hdr(X-Re 
> > > direct-Url)]
> > >
> > >
> > > But i want to use map files to reduce duplication and make it easier to 
> > > add new items.
> > >
> > > I have these map files (domain is the lookup key):
> > >
> > > desktop_ids.map:
> > > abc.de [http://abc.de] < http://abc.de [http://abc.de] > 1530
> > > abc.it [http://abc.it] < http://abc.it [http://abc.it] > 2400
> > > .
> > > .
> > >
> > > mobile_ids.map:
> > > abc.de [http://abc.de] < http://abc.de [http://abc.de] > 6780
> > > abc.it [http://abc.it] < http://abc.it [http://abc.it] > 7150
> > > .
> > > .
> > >
> > > http-request set-header X-ID-Desktop %[hdr(host),lower,map_str(/etc 
> > > /haproxy/desktop_ids.map)]
> > > http-request set-header X-ID-Mobile %[hdr(host),lower,map_str(/etc 
> > > /haproxy/mobile_ids.map)]
> > >
> > > What i would need is the following:
> > > http-request set-header X-Redirect-Url %[url,regsub(code=%[hdr(X-ID-D 
> > > esktop)],code=%[hdr(X-ID-Mobil e)],g)]
> > >
> > > http-request redirect code 302 location https://%[hdr(host)]%[hdr(X-Re 
> > > direct-Url)]
> > >
> > > But that's not possible, you cannot use variables in the regex or 
> > > substitution field of regsub. I've also tried "http-request 
> > > replace-header", but same problem, you cannot use variables for the 
> > > "match-regex".
> > >
> > > Maybe is it possible to cut the "code" param from the query string and 
> > > append it with the new value to the query string. But this needs some 
> > > complex regex and handling of multiple conditions in query string (plus 
> > > ordering of query string params is different than).
> > >
> > > Is there a possibility to use variables in regsub or in the "match-regex" 
> > > of replace-header?
> > >
> > >
> > > I think you will need a small lua action handler which do the job.
> > > In the blog post are some examples which can help you to reach your 
> > > target.
> > > https://www.haproxy.com/blog/5 -ways-to-extend-haproxy-with-l ua/ 
> > > [https://www.haproxy.com/blog/5-ways-to-extend-haproxy-with-lua/] < 
> > > https://www.haproxy.com/blog/ 5-ways-to-extend-haproxy-with- lua/ 
> > > [https://www.haproxy.com/blog/5-ways-to-extend-haproxy-with-lua/] >
> > >
> > > Maybe you can start with the example on stackoverflow and add the map 
> > > handling
> > > into the code.
> > >
> 

<    1   2   3   4   5   6   7   8   9   10   >