Re: Haproxy 1.7.11 log problems

2019-11-21 Thread Aleksandar Lazic

On 21.11.2019 at 08:09, Alexander Kasantsev wrote:

I updated HAProxy to 1.7.12 but nothing changed.


Okay, that's bad, because I thought this commit would fix your issue:
http://git.haproxy.org/?p=haproxy-1.7.git;a=commit;h=777a0aa4c8a704e06d653aed5f00e6cda2017a4d

Regards
Aleks


On 20 Nov 2019, at 15:38, Aleksandar Lazic wrote:


A 1.7.12 package is listed on this page; is this the repo you use?

https://repo.ius.io/6/x86_64/packages/h/

Can you please try 1.7.12?

Do you know that CentOS 6 reaches EOL next year?
https://wiki.centos.org/Download

Regards
Aleks

Nov 20, 2019 12:45:37 PM Alexander Kasantsev:


I'm on CentOS 6.10; the latest version available to me is 1.7.11, from the IUS repo.


On 20 Nov 2019, at 14:17, Aleksandar Lazic wrote:



Hi.

Can you please use the latest 1.7, the latest 1.8, or 2.0, and tell us if the problem
still exists?

Best regards
Aleks

Nov 20, 2019 9:52:01 AM Alexander Kasantsev:



Good day everyone!

I migrated from HAProxy 1.5 to 1.7.11 and I have some trouble with logging.

I have the following in the config file for logging:

capture request  header Host len 200
capture request  header Referer len 200
capture request  header User-Agent len 200
capture request  header Content-Type len 200
capture request  header Cookie len 300
log-format %[capture.req.hdr(0),lower]\ %ci\ -\ [%t]\ \"%HM\ %HP\ %HV\"\ %ST\ \"%[capture.req.hdr(3)]\"\ %U\ \"%[capture.req.hdr(1)]\"\ \"%[capture.req.hdr(2)]\"\ \"%[capture.req.hdr(4)]\"\ %Tq\ \"%s\"\ 'NGINX-CACHE-- "-"'\ \"%ts\"


The log format is almost the same as Nginx's.

But in some cases it works incorrectly.

For example, this log output:

Nov 20 10:41:56 lb.loc haproxy[12633]: example.com 81.4.227.173 - [20/Nov/2019:10:41:56.095] "GET /piwik.php H" 200 "-" 2396 "https://example.com/" "Mozilla/5.0" "some.cookie data" 19 "vm06.lb.rsl.loc" NGINX-CACHE-- "-" "--"

The problem is that "GET /piwik.php H" should be "GET /piwik.php HTTP/1.1"; it's the %HV parameter in the log-format.

Part of "HTTP/1.1" is randomly cut off. It may be "HT", "HTT" or "HTTP/1.".










Re: Haproxy 1.7.11 log problems

2019-11-21 Thread Lukas Tribus
Hello,

On Wed, Nov 20, 2019 at 9:51 AM Alexander Kasantsev  wrote:
>
> Good day everyone!
>
> I’m migrated from haproxy 1.5 to 1.7.11 and I have some troubles with logging
>
> I have a following in config file for logging
>
>   capture request  header Host len 200
>   capture request  header Referer len 200
>   capture request  header User-Agent len 200
>   capture request  header Content-Type len 200
>   capture request  header Cookie len 300
>   log-format %[capture.req.hdr(0),lower]\ %ci\ -\ [%t]\ \"%HM\ %HP\ %HV\"\ 
> %ST\ \"%[capture.req.hdr(3)]\"\ %U\ \"%[capture.req.hdr(1)]\"\ 
> \"%[capture.req.hdr(2)]\"\ \"%[capture.req.hdr(4)]\"\ %Tq\ \"%s\"\ 
> 'NGINX-CACHE-- "-"'\ \"%ts\"
>
>
> Logformat is almost the same with Nginx
>
> But is some cases it works incorrectly
>
> For example log output
>
> Nov 20 10:41:56 lb.loc haproxy[12633]: example.com 81.4.227.173 - 
> [20/Nov/2019:10:41:56.095] "GET /piwik.php H" 200 "-" 2396 
> "https://example.com/"; "Mozilla/5.0" "some.cookie data" 19 "vm06.lb.rsl.loc" 
> NGINX-CACHE-- "-" "—"
>
> Problem is that "GET /piwik.php H"  must be "GET /piwik.php HTTP/1.1"
> its %HV parameter in log-format

By default the URI length in the log is limited to 1024 characters.
The limit can be raised at build time with something like
DEFINE=-DREQURI_LEN=2048.

Starting with 1.8, this is configurable without recompiling by using
the tune.http.logurilen directive:

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.2-tune.http.logurilen
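
For reference, a minimal sketch of that directive in the global section (the value is only an example; on 1.7 the limit can only be raised at build time as described above):

    global
        tune.http.logurilen 2048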



Lukas



Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread Christopher Faulet

On 20/11/2019 at 21:23, William Dauchy wrote:

Hi Christopher,

On Wed, Nov 20, 2019 at 02:56:28PM +0100, Christopher Faulet wrote:

Nice, Thanks for your feedback. It is merged now. And I'm on the backports
for the 2.0.


You apparently forgot to backport
commit 0d1c2a65e8370a770d01 (MINOR: stats: Report max times in addition of the 
averages for sessions)

The 2.0 tree does not build anymore because ST_F_QT_MAX is not defined.



Damned! You're right. I'm sorry. It has been backported now. Thanks!

--
Christopher Faulet



Combining (kind of) http and tcp checks

2019-11-21 Thread Christian Ruppert

Hi list,

For an old Exchange cluster I have check listeners like:
listen chk_s015023
    bind 0.0.0.0:1001
    mode http

    monitor-uri /check

    tcp-request connection reject if { nbsrv lt 6 } { src LOCALHOST }
    monitor fail if { nbsrv lt 6 }

    default-server inter 3s rise 2 fall 3

    server s015023_smtp 192.168.15.23:25 check
    server s015023_pop3 192.168.15.23:110 check
    server s015023_imap 192.168.15.23:143 check
    server s015023_https 192.168.15.23:443 check
    server s015023_imaps 192.168.15.23:993 check
    server s015023_pop3s 192.168.15.23:995 check


Which is then being used by the actual backends like:

backend bk_exchange_https
    mode http

    option httpchk HEAD /check HTTP/1.0

    server s015023 192.168.15.23:443 ssl verify none check addr 127.0.0.1 port 1001 observe layer4
    server s015024 192.168.15.24:443 ssl verify none check addr 127.0.0.1 port 1002 observe layer4

...


The old cluster is currently being updated, and there's a built-in health check
available for Exchange which I'd like to include.

So I was thinking about something like:
listen chk_s015023_healthcheck
    bind 0.0.0.0:1003
    mode http

    monitor-uri /check_exchange

    tcp-request connection reject if { nbsrv lt 1 } { src LOCALHOST }
    monitor fail if { nbsrv lt 1 }

    default-server inter 3s rise 2 fall 3

    option httpchk GET /owa/healthcheck.htm HTTP/1.0

    server s015023_health 192.168.15.23:443 check ssl verify none


listen chk_s015023
    bind 0.0.0.0:1001
    mode http

    monitor-uri /check

    tcp-request connection reject if { nbsrv lt 7 } { src LOCALHOST }
    monitor fail if { nbsrv lt 7 }

    default-server inter 3s rise 2 fall 3

    server s015023_smtp 192.168.15.23:25 check
    server s015023_pop3 192.168.15.23:110 check
    server s015023_imap 192.168.15.23:143 check
    server s015023_https 192.168.15.23:443 check
    server s015023_imaps 192.168.15.23:993 check
    server s015023_pop3s 192.168.15.23:995 check
    server chk_s015023_healthcheck 127.0.0.1:1003 check


The new health check is marked as down/up as expected; the problem is that the
TCP check for that new health check, "server chk_s015023_healthcheck
127.0.0.1:1003 check", doesn't work.
Even though we have "tcp-request connection reject if { nbsrv lt 1 }
{ src LOCALHOST }" in the new check, it doesn't seem to be enough for the
TCP check.


Is it somehow possible to combine both checks, so that the new check's status
is recognized properly?

I'd like to avoid using an external check script to do all those checks.

--
Regards,
Christian Ruppert



Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread Илья Шипицин
By the way, a side question:

is it a common thing in the Prometheus world to perform cascaded export, like
"exporter" --> "filter out" --> "aggregate" --> "deliver to Prometheus",
in order to keep things simple and not push everything into a single tool?

On Thu, 21 Nov 2019 at 14:49, Christopher Faulet wrote:

> On 20/11/2019 at 21:23, William Dauchy wrote:
> > Hi Christopher,
> >
> > On Wed, Nov 20, 2019 at 02:56:28PM +0100, Christopher Faulet wrote:
> >> Nice, Thanks for your feedback. It is merged now. And I'm on the
> backports
> >> for the 2.0.
> >
> > You apparently forgot to backport
> > commit 0d1c2a65e8370a770d01 (MINOR: stats: Report max times in addition
> of the averages for sessions)
> >
> > 2.0 tree does not build anymore because ST_F_QT_MAX is not defined.
> >
>
> Damned ! You're right. I'm sorry. It was backported now. Thanks !
>
> --
> Christopher Faulet
>
>


Re: Combining (kind of) http and tcp checks

2019-11-21 Thread Aleksandar Lazic

Hi.

On 21.11.2019 at 10:49, Christian Ruppert wrote:

Hi list,

for an old exchange cluster I have some check listener like:
listen chk_s015023
     bind 0.0.0.0:1001
     mode http

     monitor-uri /check

     tcp-request connection reject if { nbsrv lt 6 } { src LOCALHOST }
     monitor fail if { nbsrv lt 6 }

     default-server inter 3s rise 2 fall 3

     server s015023_smtp 192.168.15.23:25 check
     server s015023_pop3 192.168.15.23:110 check
     server s015023_imap 192.168.15.23:143 check
     server s015023_https 192.168.15.23:443 check
     server s015023_imaps 192.168.15.23:993 check
     server s015023_pop3s 192.168.15.23:995 check


Which is then being used by the actual backends like:

backend bk_exchange_https
     mode http

     option httpchk HEAD /check HTTP/1.0

     server s015023 192.168.15.23:443 ssl verify none check addr 127.0.0.1 
port 1001 observe layer4
     server s015024 192.168.15.24:443 ssl verify none check addr 127.0.0.1 
port 1002 observe layer4

     ...


The old cluster is currently being updated and there's a included health check 
available for Exchange which I'd like to include.

So I was thinking about something like:
listen chk_s015023_healthcheck
     bind 0.0.0.0:1003
     mode http

     monitor-uri /check_exchange

     tcp-request connection reject if { nbsrv lt 1 } { src LOCALHOST }
     monitor fail if { nbsrv lt 1 }

     default-server inter 3s rise 2 fall 3

     option httpchk GET /owa/healthcheck.htm HTTP/1.0

     server s015023_health 192.168.15.23:443 check ssl verify none


listen chk_s015023
     bind 0.0.0.0:1001
     mode http

     monitor-uri /check

     tcp-request connection reject if { nbsrv lt 7 } { src LOCALHOST }
     monitor fail if { nbsrv lt 7 }

     default-server inter 3s rise 2 fall 3

     server s015023_smtp 192.168.15.23:25 check
     server s015023_pop3 192.168.15.23:110 check
     server s015023_imap 192.168.15.23:143 check
     server s015023_https 192.168.15.23:443 check
     server s015023_imaps 192.168.15.23:993 check
     server s015023_pop3s 192.168.15.23:995 check
     server chk_s015023_healthcheck 127.0.0.1:1003 check


The new healthcheck is marked as being down/up as expected, the problem is, that 
the TCP check for that new health check "server chk_s015023_healthcheck 
127.0.0.1:1003 check" doesn't work.
Even though we have that "tcp-request connection reject if { nbsrv lt 1 } { src 
LOCALHOST }" within the new check, it doesn't seem to be enough for the TCP check.


Is it somehow possible to combine both checks, to make it recognize the new 
check's status properly?

I'd like to avoid using an external check script to do all those checks.


Maybe you can use the track feature of HAProxy for that:
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.2-track

I have never used it, but it looks like exactly what you want.
One backend for the TCP checks and one backend for HTTP, right?
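
For illustration, the resulting server line would look roughly like this (a sketch reusing the proxy and server names from your config; untested):

    server s015023_https 192.168.15.23:443 track chk_s015023_healthcheck/s015023_health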

Regards
Aleks



Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread William Dauchy
On Thu, Nov 21, 2019 at 03:00:02PM +0500, Илья Шипицин wrote:
> is it common thing in Prometheus world to perform cascade export ? like
> "exporter" --> "filter out" --> "aggregate" --> "deliver to prometheus"
> in order to keep things simple and not to push everything into single tool

No, the best practice is often to keep the original data in a local
Prometheus and aggregate (quite often you use another instance for the
aggregation).
But here these patches are about dropping the data you know you will never
use.
-- 
William


Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread Илья Шипицин
On Thu, 21 Nov 2019 at 15:07, William Dauchy wrote:

> On Thu, Nov 21, 2019 at 03:00:02PM +0500, Илья Шипицин wrote:
> > is it common thing in Prometheus world to perform cascade export ? like
> > "exporter" --> "filter out" --> "aggregate" --> "deliver to prometheus"
> > in order to keep things simple and not to push everything into single
> tool
>
> no, the best practice is often to keep the original data in a local
> prometheus, and aggregate (quite often you use another instance for
> aggregation)
> But here those patch are about dropping the data you know you will never
> use.
>

I understand. However, those patches add complexity (which might be moved
to another dedicated tool).


> --
> William
>


Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread William Dauchy
On Thu, Nov 21, 2019 at 03:09:30PM +0500, Илья Шипицин wrote:
> I understand. However, those patches add complexity (which might be moved
> to another dedicated tool)

Those patches make sense for heavy HAProxy instances. As you might have
seen above, we are talking about > 130MB of data, so a full scrape every
60s or less is not realistic. Even loading the data might take too much
time: we had cases where loading the data on the exporter side took longer
than the scraping interval, generating a snowball effect.
It's good practice to avoid exporting data you know you won't use, instead
of scraping -> deleting -> aggregating.
In our case we do not use most of the server metrics, and they represent a
factor of 10 in terms of exported data.
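
For illustration, a minimal sketch of exposing the exporter and scraping only selected scopes via the query string these patches introduce (the bind port is an arbitrary example):

    frontend prometheus
        mode http
        bind :8404
        # metrics endpoint; Prometheus then scrapes e.g.
        # /metrics?scope=global&scope=frontend&scope=backend
        http-request use-service prometheus-exporter if { path /metrics }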

-- 
William


Re: Combining (kind of) http and tcp checks

2019-11-21 Thread Christian Ruppert

Hi Aleks,

On 2019-11-21 11:01, Aleksandar Lazic wrote:

Hi.

On 21.11.2019 at 10:49, Christian Ruppert wrote:

Hi list,

for an old exchange cluster I have some check listener like:
listen chk_s015023
     bind 0.0.0.0:1001
     mode http

     monitor-uri /check

     tcp-request connection reject if { nbsrv lt 6 } { src 
LOCALHOST }

     monitor fail if { nbsrv lt 6 }

     default-server inter 3s rise 2 fall 3

     server s015023_smtp 192.168.15.23:25 check
     server s015023_pop3 192.168.15.23:110 check
     server s015023_imap 192.168.15.23:143 check
     server s015023_https 192.168.15.23:443 check
     server s015023_imaps 192.168.15.23:993 check
     server s015023_pop3s 192.168.15.23:995 check


Which is then being used by the actual backends like:

backend bk_exchange_https
     mode http

     option httpchk HEAD /check HTTP/1.0

     server s015023 192.168.15.23:443 ssl verify none check addr 
127.0.0.1 port 1001 observe layer4
     server s015024 192.168.15.24:443 ssl verify none check addr 
127.0.0.1 port 1002 observe layer4

     ...


The old cluster is currently being updated and there's a included 
health check available for Exchange which I'd like to include.

So I was thinking about something like:
listen chk_s015023_healthcheck
     bind 0.0.0.0:1003
     mode http

     monitor-uri /check_exchange

     tcp-request connection reject if { nbsrv lt 1 } { src 
LOCALHOST }

     monitor fail if { nbsrv lt 1 }

     default-server inter 3s rise 2 fall 3

     option httpchk GET /owa/healthcheck.htm HTTP/1.0

     server s015023_health 192.168.15.23:443 check ssl verify none


listen chk_s015023
     bind 0.0.0.0:1001
     mode http

     monitor-uri /check

     tcp-request connection reject if { nbsrv lt 7 } { src 
LOCALHOST }

     monitor fail if { nbsrv lt 7 }

     default-server inter 3s rise 2 fall 3

     server s015023_smtp 192.168.15.23:25 check
     server s015023_pop3 192.168.15.23:110 check
     server s015023_imap 192.168.15.23:143 check
     server s015023_https 192.168.15.23:443 check
     server s015023_imaps 192.168.15.23:993 check
     server s015023_pop3s 192.168.15.23:995 check
     server chk_s015023_healthcheck 127.0.0.1:1003 check


The new healthcheck is marked as being down/up as expected, the 
problem is, that the TCP check for that new health check "server 
chk_s015023_healthcheck 127.0.0.1:1003 check" doesn't work.
Even though we have that "tcp-request connection reject if { nbsrv lt 
1 } { src LOCALHOST }" within the new check, it doesn't seem to be 
enough for the TCP check.


Is it somehow possible to combine both checks, to make it recognize 
the new check's status properly?
I'd like to avoid using an external check script to do all those 
checks.


Maybe you can use the track feature from haproxy for that topic.
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.2-track

I have never used it but it looks exactly what you want.
1 backend for tcp checks and 1 backend for http right?

Regards
Aleks



Thanks! That seems to do the trick:
listen chk_s015023_healthcheck
    bind 0.0.0.0:1003
    mode http

    monitor-uri /check_exchange

    tcp-request connection reject if { nbsrv lt 1 } { src LOCALHOST }
    monitor fail if { nbsrv lt 1 }

    default-server inter 3s rise 2 fall 3

    option httpchk GET /owa/healthcheck.htm HTTP/1.0

    server s015023_health 192.168.15.23:443 check ssl verify none

listen chk_s015023
    bind 0.0.0.0:1001
    mode http

    monitor-uri /check

    tcp-request connection reject if { nbsrv lt 6 } { src LOCALHOST }
    monitor fail if { nbsrv lt 6 }

    default-server inter 3s rise 2 fall 3

    server s015023_smtp 192.168.15.23:25 check
    server s015023_pop3 192.168.15.23:110 check
    server s015023_imap 192.168.15.23:143 check
    server s015023_https 192.168.15.23:443 track chk_s015023_healthcheck/s015023_health
    server s015023_imaps 192.168.15.23:993 check
    server s015023_pop3s 192.168.15.23:995 check

--
Regards,
Christian Ruppert



Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread Илья Шипицин
On Thu, 21 Nov 2019 at 15:18, William Dauchy wrote:

> On Thu, Nov 21, 2019 at 03:09:30PM +0500, Илья Шипицин wrote:
> > I understand. However, those patches add complexity (which might be moved
> > to another dedicated tool)
>
> those patch makes sense for heavy haproxy instances. As you might have
> seen above, we are talking about > 130MB of data. So for a full scraping
> every 60s or less, this is not realistic. Even the data loading might
> take too much time. We had cases where loading the data on exporter side
> was taking more time than the frequency of scraping, generating a
> snowball effect.
> It's a good pratice to avoid exporting data you know
> you won't use instead of: scraping -> deleting -> aggregate
> In our case we do not use most of the server metrics. It represents a
> factor of 10 in terms of exported data.
>

Yep, I did see 130MB and I was impressed.


>
> --
> William
>


Re: Combining (kind of) http and tcp checks

2019-11-21 Thread Aleksandar Lazic

On 21.11.2019 at 11:23, Christian Ruppert wrote:

Hi Aleks,

On 2019-11-21 11:01, Aleksandar Lazic wrote:

Hi.

On 21.11.2019 at 10:49, Christian Ruppert wrote:

Hi list,

for an old exchange cluster I have some check listener like:
listen chk_s015023


[snipp]

The new healthcheck is marked as being down/up as expected, the problem is, 
that the TCP check for that new health check "server chk_s015023_healthcheck 
127.0.0.1:1003 check" doesn't work.
Even though we have that "tcp-request connection reject if { nbsrv lt 1 } { 
src LOCALHOST }" within the new check, it doesn't seem to be enough for the 
TCP check.


Is it somehow possible to combine both checks, to make it recognize the new 
check's status properly?

I'd like to avoid using an external check script to do all those checks.


Maybe you can use the track feature from haproxy for that topic.
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.2-track

I have never used it but it looks exactly what you want.
1 backend for tcp checks and 1 backend for http right?

Regards
Aleks



Thanks! That seems to do the trick:
listen chk_s015023_healthcheck
     bind 0.0.0.0:1003
     mode http

     monitor-uri /check_exchange

     tcp-request connection reject if { nbsrv lt 1 } { src LOCALHOST }
     monitor fail if { nbsrv lt 1 }

     default-server inter 3s rise 2 fall 3

     option httpchk GET /owa/healthcheck.htm HTTP/1.0

     server s015023_health 192.168.15.23:443 check ssl verify none

listen chk_s015023
     bind 0.0.0.0:1001
     mode http

     monitor-uri /check

     tcp-request connection reject if { nbsrv lt 6 } { src LOCALHOST }
     monitor fail if { nbsrv lt 6 }

     default-server inter 3s rise 2 fall 3

     server s015023_smtp 192.168.15.23:25 check
     server s015023_pop3 192.168.15.23:110 check
     server s015023_imap 192.168.15.23:143 check
     server s015023_https 192.168.15.23:443 track 
chk_s015023_healthcheck/s015023_health

     server s015023_imaps 192.168.15.23:993 check
     server s015023_pop3s 192.168.15.23:995 check



Yes, HAProxy is so amazing ;-))



Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

2019-11-21 Thread Baptiste
Hi there,

Since a short term reliable solution can't be found, we can apply this
patch as a workaround.

Baptiste



Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-21 Thread William Dauchy
Hi Christopher,

On Tue, Nov 19, 2019 at 04:35:47PM +0100, Christopher Faulet wrote:
> +/* Parse the query string of the request URI to filter the metrics. It returns 1 on
> + * success and -1 on error. */
> +static int promex_parse_uri(struct appctx *appctx, struct stream_interface *si)
> +{
> +	struct channel *req = si_oc(si);
> +	struct channel *res = si_ic(si);
> +	struct htx *req_htx, *res_htx;
> +	struct htx_sl *sl;
> +	const char *p, *end;
> +	struct buffer *err;
> +	int default_scopes = PROMEX_FL_SCOPE_ALL;
> +	int len;
> +
> +	/* Get the query-string */
> +	req_htx = htxbuf(&req->buf);
> +	sl = http_get_stline(req_htx);
> +	if (!sl)
> +		goto error;
> +	p = http_find_param_list(HTX_SL_REQ_UPTR(sl), HTX_SL_REQ_ULEN(sl), '?');
> +	if (!p)
> +		goto end;

It's my turn to be sorry. I tested incorrectly on my side against a real
Prometheus integration, because I mixed the old metrics and the new ones.
Indeed, Prometheus is trying to scrape the encoded URL, such as:

metrics%3Fscope=global&scope=frontend&scope=backend

instead of

metrics?scope=global&scope=frontend&scope=backend

Do you think it could be acceptable to send a patch adding a function such as:

static inline char *http_find_encoded_param_list(char *path, size_t path_l, char *delim);

which would look for the encoded delimiter if we don't find '?'. Or is there
a way to decode the URL first?

I'm OK to handle the patch if you validate the solution.

Thanks,
-- 
William



Re: master-worker no-exit-on-failure with SO_REUSEPORT and a port being already in use

2019-11-21 Thread Christian Ruppert

On 2019-11-20 11:05, William Lallemand wrote:

On Wed, Nov 20, 2019 at 10:19:20AM +0100, Christian Ruppert wrote:

Hi William,

thanks for the patch. I'll test it later today. What I actually wanted to achieve is:
https://cbonte.github.io/haproxy-dconv/2.0/management.html#4

Then HAProxy tries to bind to all listening ports. If some fatal errors happen
(eg: address not present on the system, permission denied), the process quits
with an error. If a socket binding fails because a port is already in use,
then the process will first send a SIGTTOU signal to all the pids specified
in the "-st" or "-sf" pid list. This is what is called the "pause" signal. It
instructs all existing haproxy processes to temporarily stop listening to
their ports so that the new process can try to bind again. During this time,
the old process continues to process existing connections. If the binding
still fails (because for example a port is shared with another daemon), then
the new process sends a SIGTTIN signal to the old processes to instruct them
to resume operations just as if nothing happened. The old processes will then
restart listening to the ports and continue to accept connections. Note that
this mechanism is system

In my test case though it failed to do so.


Well, it only works with HAProxy processes, not with other processes. There is
no such mechanism for a process which is neither an HAProxy process nor a
process using SO_REUSEPORT.

With HAProxy processes it will bind with SO_REUSEPORT, and will only use the
SIGTTOU/SIGTTIN signals if it fails to do so.

This part of the documentation is for HAProxy without master-worker mode.
In master-worker mode, once the master is launched successfully, it is never
supposed to quit upon a reload (kill -USR2).

During a reload in master-worker mode, the master will do a -sf. If the reload
fails for any reason (bad configuration, unable to bind, etc.), the behavior is
to keep the previous workers. It only tries to kill the workers if the reload
succeeds, so this is the default behavior.
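
For context, a minimal sketch of the global settings in master-worker mode with the option named in the subject (illustrative only; the rest of the configuration is omitted):

    global
        master-worker no-exit-on-failure
        # a reload is triggered by sending USR2 to the master, which re-executes
        # itself and starts new workers, passing -sf with the old workers' pids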


Your patch seems to fix the issue. The master process won't exit 
anymore. Fallback seems to work during my initial tests. Thanks!


--
Regards,
Christian Ruppert



Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

2019-11-21 Thread Willy Tarreau
On Thu, Nov 21, 2019 at 02:12:09PM +0100, Baptiste wrote:
> Hi there,
> 
> Since a short term reliable solution can't be found, we can apply this
> patch as a workaround.

Yep, as discussed on the github issue I think it remains the most
reasonable short-term approach.

Thanks,
Willy



Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

2019-11-21 Thread Willy Tarreau
On Thu, Nov 21, 2019 at 05:18:58PM +0100, Willy Tarreau wrote:
> On Thu, Nov 21, 2019 at 02:12:09PM +0100, Baptiste wrote:
> > Hi there,
> > 
> > Since a short term reliable solution can't be found, we can apply this
> > patch as a workaround.
> 
> Yep, as discussed on the github issue I think it remains the most
> reasonable short-term approach.

Patch now merged, thanks guys.

Willy



HTX no connection close - 2.0.9

2019-11-21 Thread Valters Jansons
Hello everyone,

I am running HAProxy v2.0.9 on Ubuntu using the dedicated PPA 
(ppa:vbernat/haproxy-2.0). There seems to be a behavior change for a specific 
endpoint between HTX enabled and HTX disabled, but I have not been able to 
pin-point the exact root cause.

With HTX disabled (`no option http-use-htx`), a browser makes a POST request 
(ALPN H2) which is shown as HTTP/1.1. That then reaches the backend (IIS) as 
HTTP/1.1 and finishes successfully in around 10 seconds.

With the default behavior of HTX enabled, the POST request comes in and is 
shown as HTTP/2.0. It then connects to backend as HTTP/1.1 and the client 
receives a 200 OK and the response data around the same time as without HTX. 
However, the connection does not get properly closed until server timeout with 
a termination_state of sD-- (server-side timeout in the DATA phase). At that 
point, debug log shows `srvcls` and the client connection is 'successfully' 
closed. The backend itself seems to think it handled the request 'as usual'.

The non-HTX debug log does not show srvcls, clicls and closed events on the 
backend whatsoever, but seeing as that connection does terminate, I am guessing 
the relevant events just don't get logged with HTX disabled.

We are using http-keep-alive as the default connection mode, but changing it to 
http-server-close or httpclose does not seem to make a difference.
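
For reference, a minimal sketch of the defaults being toggled during these tests (directive names as in 2.0; the timeout value is only a placeholder):

    defaults
        mode http
        option http-use-htx            # HTX enabled (the 2.0 default)
        # no option http-use-htx       # legacy HTTP processing, used for comparison
        option http-keep-alive         # also tried: option http-server-close / option httpclose
        timeout server 30s             # the timeout that eventually fires with sD--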

The strange part here is that we are seeing this particular behavior with HTX 
enabled only on browsers (tested Chrome and Firefox on multiple machines), as 
testing using cURL (H2) or simply via OpenSSL's s_client (HTTP/1.1) appears to 
work even when HTX is enabled, and additionally, we are seeing this on the 
particular endpoint only for a specific user's context. That could also imply 
that it has something to do with the response data, or maybe it could just be a 
red herring. Maybe HTX is waiting on some trailing headers or some other 
feature of HTTP..

Any ideas as to where I should start troubleshooting HTX behavior for one 
production endpoint for one specific user context?

Best regards,
Valters Jansons


Cache based on HAProxy

2019-11-21 Thread Aleksandar Lazic


Hi.
 
Has anyone seen this project?
https://github.com/jiangwenyuan/nuster
 
It's a high-performance HTTP proxy cache server and RESTful NoSQL cache server 
based on HAProxy.
 
The HAProxy version in use is 1.9.
 
Regards
Alex