stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-12 Thread PiBa-NL

Hi List,

I've configured haproxy with HTX, and when I try to filter the stats webpage
by sending this request: "GET /?;csv;scope=b1" to '2.0-dev0-762475e
2019/01/10', it crashes with the trace below.

1.9.0 and 1.9.1 are also affected.

Can someone take a look? Thanks in advance.

A regtest that reproduces the behavior is attached; I think it could be
included in the haproxy repository.


Regards,
PiBa-NL (Pieter)

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00564fe7 in strnistr (str1=0x802631048 "fe1", len_str1=3, str2=0x804e3bf4c, len_str2=3)
    at src/standard.c:3657
3657    while (toupper(*start) != toupper(*str2)) {
(gdb) bt full
#0  0x00564fe7 in strnistr (str1=0x802631048 "fe1", len_str1=3, str2=0x804e3bf4c, len_str2=3)
    at src/standard.c:3657
    pptr = 0x804e3bf4c
    sptr = 0x80271df80 "\330?"
    start = 0x802631048 "fe1"
    slen = 3
    plen = 3
    tmp1 = 0
    tmp2 = 4294959392
#1  0x004d01d3 in stats_dump_proxy_to_buffer (si=0x8026416d8, htx=0x8027c8e40, px=0x8026b3c00, uri=0x802638000) at src/stats.c:2079
    appctx = 0x802678380
    s = 0x802641400
    rep = 0x802641470
    sv = 0x8027c8e40
    svs = 0x33be1e0
    l = 0x4d31df
    flags = 0
#2  0x004d4139 in stats_dump_stat_to_buffer (si=0x8026416d8, htx=0x8027c8e40, uri=0x802638000) at src/stats.c:2652
    appctx = 0x802678380
    rep = 0x802641470
    px = 0x8026b3c00
#3  0x004d56bb in htx_stats_io_handler (appctx=0x802678380) at src/stats.c:3299
    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
    req_htx = 0x8027c8e40
    res_htx = 0x8027c8e40
#4  0x004d2546 in http_stats_io_handler (appctx=0x802678380) at src/stats.c:3367
    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
#5  0x005f729f in task_run_applet (t=0x8026566e0, context=0x802678380, state=16385) at src/applet.c:85
    app = 0x802678380
    si = 0x8026416d8
#6  0x005f2533 in process_runnable_tasks () at src/task.c:435
    t = 0x8026566e0
    state = 16385
    ctx = 0x802678380
    process = 0x5f7200
    t = 0x8026566e0
    max_processed = 199
#7  0x005163b2 in run_poll_loop () at src/haproxy.c:2619
    next = 0
    exp = 1137019023
#8  0x00513008 in run_thread_poll_loop (data=0x8026310f0) at src/haproxy.c:2684
    start_lock = 0
    ptif = 0x935d40
    ptdf = 0x0
#9  0x0050f636 in main (argc=4, argv=0x7fffeb08) at src/haproxy.c:3313
    tids = 0x8026310f0
    threads = 0x8026310f8
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 4294967295}}
    err = 0
    retry = 200
    limit = {rlim_cur = 4052, rlim_max = 4052}
    errmsg = "\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000t\240\220?\260|6\224`)\224\000\000\000\000\000\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000\240\352\377\377\377\177\000\000R\201\000\002\b\000\000\000\001\000\000"
    pidfd = -1
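
For context, strnistr() is a length-bounded, case-insensitive substring
search. The sketch below is only a simplified stand-in for that idea (it is
not haproxy's actual code in src/standard.c); it marks the dereference that
matches the faulting line in frame #0, where str2 points to memory that is
no longer accessible:

#include <ctype.h>
#include <stddef.h>

/* Simplified sketch of a length-bounded, case-insensitive substring search
 * (the general idea behind strnistr(); not haproxy's implementation).
 * Returns a pointer to the first match of str2 in str1, or NULL. */
static const char *strnistr_sketch(const char *str1, size_t len1,
                                   const char *str2, size_t len2)
{
    const char *start = str1;

    if (!len2 || len2 > len1)
        return NULL;

    for (; len1 >= len2; start++, len1--) {
        /* faults here if str2 is an invalid pointer, as in frame #0 */
        if (toupper((unsigned char)*start) != toupper((unsigned char)*str2))
            continue;

        size_t i;
        for (i = 1; i < len2; i++)
            if (toupper((unsigned char)start[i]) != toupper((unsigned char)str2[i]))
                break;
        if (i == len2)
            return start;
    }
    return NULL;
}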

From 838ecb4e153c1d859d0a49e0554ff050ff10033c Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sat, 12 Jan 2019 21:57:48 +0100
Subject: [PATCH] REGTEST: checks basic stats webpage functionality

This regtest verifies that the stats webpage can be used to change a
server state to maintenance or drain, and that filtering the page scope
will result in a filtered page.
---
 .../h_webstats-scope-and-post-change.vtc  | 83 +++
 1 file changed, 83 insertions(+)
 create mode 100644 reg-tests/webstats/h_webstats-scope-and-post-change.vtc

diff --git a/reg-tests/webstats/h_webstats-scope-and-post-change.vtc 
b/reg-tests/webstats/h_webstats-scope-and-post-change.vtc
new file mode 100644
index ..a77483b5
--- /dev/null
+++ b/reg-tests/webstats/h_webstats-scope-and-post-change.vtc
@@ -0,0 +1,83 @@
+varnishtest "Webgui stats page check filtering with scope and changing server state"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+server s1 {
+} -start
+
+haproxy h1 -conf {
+  global
+    stats socket /tmp/haproxy.socket level admin
+
+  defaults
+    mode http
+    ${no-htx} option http-use-htx
+
+  frontend fe1
+    bind "fd@${fe1}"
+    stats enable
+    stats refresh 5s
+    stats uri /
+    stats admin if TRUE
+
+  backend b1
+    server srv1 ${s1_addr}:${s1_port}
+    server srv2 ${s1_addr}:${s1_port}
+    server srv3 ${s1_addr}:${s1_port}
+
+  backend b2
+    server srv1 ${s1_addr}:${s1_port}
+    server srv2 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_fe1_sock} {
+    txreq -url "/;csv;"
+    rxresp
+    expect 

Re: Lots of mail from email alert on 1.9.x

2019-01-12 Thread Willy Tarreau
Hi Pieter,

On Sat, Jan 12, 2019 at 09:30:39PM +0100, PiBa-NL wrote:
> Hi Willy, Olivier,
> 
> > On 12-1-2019 at 13:11, Willy Tarreau wrote:
> > Hi Pieter,
> > 
> > we need to prepend this at the beginning of chk_report_conn_err():
> > 
> > if (!check->server)
> > return;
> > 
> > We need to make sure that check->server is properly tested everywhere.
> > With a bit of luck this one was the only remnant.
> > 
> > Thanks!
> > Willy
> 
> With the check above added, mail alerts seem to work properly here, or at
> least as well as they used to, anyhow.

Great, thank you!

> Once the patches and the above addition get committed, that leaves the other
> 'low priority' issue of needing a short timeout to send the exact number of
> 'expected' mails.
>     EXPECT resp.http.mailsreceived (10) == "16" failed
> To be honest I only noticed it while making the regtest and double-checking
> what to expect. When I validated mails in my actual environment it seemed to
> work properly. (Though the server I took out to test has a health check with
> a 60-second interval.) Anyhow, it's been like this for years AFAIK; I guess
> it won't matter much if it stays like this a bit longer.

I think it'll indeed stay for a while given that I'd like to focus a
bit more on collecting the remaining important stuff for 1.9.2.

Cheers,
Willy



Re: Lots of mail from email alert on 1.9.x

2019-01-12 Thread PiBa-NL

Hi Willy, Olivier,

On 12-1-2019 at 13:11, Willy Tarreau wrote:

Hi Pieter,

we need to prepend this at the beginning of chk_report_conn_err():

if (!check->server)
return;

We need to make sure that check->server is properly tested everywhere.
With a bit of luck this one was the only remnant.

Thanks!
Willy


With the check above added, mail alerts seem to work properly here, or at
least as well as they used to, anyhow.


Once the patches and the above addition get committed, that leaves the other
'low priority' issue of needing a short timeout to send the exact number of
'expected' mails.

    EXPECT resp.http.mailsreceived (10) == "16" failed
To be honest I only noticed it while making the regtest and double-checking
what to expect. When I validated mails in my actual environment it seemed to
work properly. (Though the server I took out to test has a health check with
a 60-second interval.) Anyhow, it's been like this for years AFAIK; I guess
it won't matter much if it stays like this a bit longer.


Regards,
PiBa-NL (Pieter)




Re: haproxy issue tracker discussion

2019-01-12 Thread Willy Tarreau
On Sat, Jan 12, 2019 at 01:09:08PM +0100, Tim Düsterhus wrote:
> Willy,
> 
> On 12.01.19 at 12:45, Willy Tarreau wrote:
> >> This example makes me wonder, though: Should the various branches be
> >> separate repositories on GitHub like on haproxy.org or should they be
> >> actual branches (e.g. 1.8.x in haproxy/haproxy instead of
> >> haproxy/haproxy-1.8) for the mirror?
> > 
> > I've been thinking about this for a while in the past, which is the
> > reason why we purposely didn't upload the other branches yet. On the
> > one hand having all branches at once in a single repository is
> > convenient to retrieve everything at once. On the other hand, most
> > people cloning the repo are not interested in retrieving maintenance
> > versions for old branches. In addition, while at the moment and for
> > simplicity reasons, all stable maintainers do have write access to
> > the master repository, in the future we could imagine delegating some
> > outdated branches to certain contributors who still need to maintain
> > them for whatever reason, and in this case we might not feel much at
> > ease with granting them a write access to all other repos and have to
> > face their occasional mistakes.
> 
> The situation on GitHub does not need to mirror the situation on
> haproxy.org. You could still use separated repositories on haproxy.org
> to separate permissions and push the "validated" commits to GitHub. This
> is what the PHP project does. The canonical repositories live on their
> infrastructure and GitHub is a mirror only. They have a pre-receive hook
> that prevents pushes if files / branches outside one's permission set
> are touched: https://github.com/php/karma/blob/master/hooks/pre-receive
> 
> The post-receive hook would then do the following for dev:
> 
> git push github master:master
> git push github --tags
> 
> And this for 1.8:
> 
> git push github master:1.8.x
> git push github --tags

Well, maybe indeed. However, if a new maintainer comes along and wants to
follow a different process, he will need permission to push to GH. But you
make a good point that GH is only a mirror anyway, so he'll have to have
access to haproxy.org and publish the reference there, so that might work
after all. As you can see, I'm just mentioning use cases, though I'll
probably not be the one having the last word on the implementation given
that I'll rely on others' help.

> Just mentioning the issue does not close it! You have to prefix the
> number with a "(Fix(es)?|Closes?)".
> Mentioning the issue comes with the nice property that the commit
> appears in the timeline of the issue, which I consider useful.

Got it, I didn't think about this; it makes sense indeed. Given that we'll
have unique ID numbers across all branches, I agree it would be sad not to
be able to reference them easily.

> > Mind you, I don't have any idea what language this code block uses or
> > where it has to be stuffed, so I'll trust you :-)
> 
> It's the Graph Query Language the v4 API of GitHub uses:
> https://developer.github.com/v4/. You send it as the body of the HTTP
> API request and get back a JSON formatted reply with the data you asked
> for. And if it's non-empty you send a new API request that opens /
> closes the issue in question.
> Where it needs to be stuffed: On some server that runs a cronjob /
> ingests the webhooks. :-)

Thanks for the explanation.

> In the long run it can probably be stuffed as a GitHub Action and be
> part of the repository (in the .github folder). They are still a Beta,
> though: https://developer.github.com/actions/

Thanks for the link, I wasn't aware of this.

> > OK, that's manageable I guess, thanks. I find it a bit annoying that it uses
> > the code repository to store the github interface configuration, but aside
> > from this it's probably OK.
> 
> At least you can use the .github/ hidden folder now. In former times
> they had to be placed at the root of the repository.

Ah so we're seeing the improved version :-)

>  status: pending-backport
> >>>
> >>> I think this one is implied by the presence of "affects:"
> >>
> >> Not necessarily. "affects" without "pending-backport" probably needs
> >> more work finding the issue first, while "pending-backport" can be as
> >> easy as a cherry-pick.
> > 
> > OK, I see your point, but in this case once the dev branch is fixed and
> > "affects: dev" is no longer there, pending-backport is implied
> > by ("affects: 1.x" and not "affects: dev"). This even works for bugs
> > that only affect older branches. 
> 
> Does it? What if a bug is found that only affects the current stable
> branch, but not dev, because some refactoring "accidentally" fixed it?

In practice we *always* first try to diagnose the issue in -dev. The
only case where dev is not affected but an older branch is affected is
when we have diagnosed it enough to know that dev is really not affected.

However, I'm seeing the problem with -dev: we need to know that a fix
is known. It's the "status: resolved" that 

Re: Haproxy using 100% CPU - mostly epoll call

2019-01-12 Thread Willy Tarreau
Hello Basha,

On Wed, Jan 09, 2019 at 01:16:18PM +, Basha Mougamadou wrote:
> Hello,
> 
> Using haproxy 1.8.16-5c3f237, for an unknown reason, one of the processes
> uses all the available cores.
> 
>PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
>  25120 haproxy   20   0  813808 212372   7268 R 799.3  0.3   1222:23 haproxy
>  26653 haproxy   20   0  814916 214880   6044 S   1.7  0.3   2:12.09 haproxy
> 
> As of now, the process is still running; I haven't killed it, for the sake
> of debugging the issue.
> 
> With the debug info below, do you have some ideas on why/how this occurs?

It's been a while since we've last had such issues. If the process still
works, could you please issue the following commands to the CLI ?
  "show activity"
  "show fd"
  "show info"
  "show sess all"

Please note that this last one will reveal some possibly sensitive
information like frontend/backend/server names or client IP addresses,
so you'll probably prefer to check/edit the contents before posting.
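
If it helps, the commands can be sent to the stats socket with any small
client; a minimal sketch in C (the socket path below is an assumption, use
whatever your "stats socket" line declares) would be:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    /* Assumption: adjust to the path of your "stats socket" directive. */
    const char *path = "/var/run/haproxy.sock";
    const char *cmd  = "show activity\n";   /* one command per connection */
    struct sockaddr_un sun;
    char buf[4096];
    ssize_t n;
    int fd;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    memset(&sun, 0, sizeof(sun));
    sun.sun_family = AF_UNIX;
    strncpy(sun.sun_path, path, sizeof(sun.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&sun, sizeof(sun)) < 0)
        return 1;

    /* send the CLI command and dump the reply until haproxy closes */
    write(fd, cmd, strlen(cmd));
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}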

Thanks,
Willy



Re: HAProxy compilation issue

2019-01-12 Thread Willy Tarreau
Hi Olivier,

On Wed, Jan 09, 2019 at 07:23:42PM +0100, Olivier D wrote:
> Hello folks,
> 
> Just wanted to raise a compilation error on HAProxy that I was able to
> solve by myself, and to ask whether this issue is haproxy-related or
> compiler-related (and if a fix should be provided in the future).
> 
> Compiling haproxy (1.8.17) failed with this error:
(...)
> /usr/src/haproxy/opensslbin/lib/libcrypto.a(threads_pthread.o): In function
> `fork_once_func':
> threads_pthread.c:(.text+0x16): undefined reference to `pthread_atfork'
> collect2: error: ld returned 1 exit status

It's been a while since we've had such linking issues. Usually they come
from libpthread or libdl, which are almost always shared. Can you please
try the attached patch?

Thanks,
Willy
diff --git a/Makefile b/Makefile
index 94e0473..24bd87a 100644
--- a/Makefile
+++ b/Makefile
@@ -574,17 +574,6 @@ OPTIONS_CFLAGS += -DCONFIG_REGPARM=3
 BUILD_OPTIONS  += $(call ignore_implicit,USE_REGPARM)
 endif
 
-ifneq ($(USE_DL),)
-BUILD_OPTIONS   += $(call ignore_implicit,USE_DL)
-OPTIONS_LDFLAGS += -ldl
-endif
-
-ifneq ($(USE_THREAD),)
-BUILD_OPTIONS   += $(call ignore_implicit,USE_THREAD)
-OPTIONS_CFLAGS  += -DUSE_THREAD
-OPTIONS_LDFLAGS += -lpthread
-endif
-
 # report DLMALLOC_SRC only if explicitly specified
 ifneq ($(DLMALLOC_SRC),)
 BUILD_OPTIONS += DLMALLOC_SRC=$(DLMALLOC_SRC)
@@ -618,20 +607,6 @@ endif
 OPTIONS_OBJS  += src/ssl_sock.o
 endif
 
-# The private cache option affect the way the shctx is built
-ifneq ($(USE_PRIVATE_CACHE),)
-OPTIONS_CFLAGS  += -DUSE_PRIVATE_CACHE
-else
-ifneq ($(USE_PTHREAD_PSHARED),)
-OPTIONS_CFLAGS  += -DUSE_PTHREAD_PSHARED
-OPTIONS_LDFLAGS += -lpthread
-else
-ifneq ($(USE_FUTEX),)
-OPTIONS_CFLAGS  += -DUSE_SYSCALL_FUTEX
-endif
-endif
-endif
-
 ifneq ($(USE_LUA),)
 check_lua_lib = $(shell echo "int main(){}" | $(CC) -o /dev/null -x c - $(2) 
-l$(1) 2>/dev/null && echo $(1))
 check_lua_inc = $(shell if [ -d $(2)$(1) ]; then echo $(2)$(1); fi;)
@@ -838,6 +813,33 @@ OPTIONS_CFLAGS += -DCONFIG_HAP_NS
 BUILD_OPTIONS  += $(call ignore_implicit,USE_NS)
 endif
 
+# -ldl and -lpthread should appear last otherwise some systems may face
+# some build issues.
+ifneq ($(USE_DL),)
+BUILD_OPTIONS   += $(call ignore_implicit,USE_DL)
+OPTIONS_LDFLAGS += -ldl
+endif
+
+ifneq ($(USE_THREAD),)
+BUILD_OPTIONS   += $(call ignore_implicit,USE_THREAD)
+OPTIONS_CFLAGS  += -DUSE_THREAD
+OPTIONS_LDFLAGS += -lpthread
+endif
+
+# The private cache option affect the way the shctx is built
+ifneq ($(USE_PRIVATE_CACHE),)
+OPTIONS_CFLAGS  += -DUSE_PRIVATE_CACHE
+else
+ifneq ($(USE_PTHREAD_PSHARED),)
+OPTIONS_CFLAGS  += -DUSE_PTHREAD_PSHARED
+OPTIONS_LDFLAGS += -lpthread
+else
+ifneq ($(USE_FUTEX),)
+OPTIONS_CFLAGS  += -DUSE_SYSCALL_FUTEX
+endif
+endif
+endif
+
  Global link options
 # These options are added at the end of the "ld" command line. Use LDFLAGS to
 # add options at the beginning of the "ld" command line if needed.


Re: Lots of mail from email alert on 1.9.x

2019-01-12 Thread Willy Tarreau
Hi Pieter,

On Sat, Jan 12, 2019 at 01:00:33AM +0100, PiBa-NL wrote:
> Thanks for this 'change in behavior' ;). Indeed the mailbomb is fixed, and
> it seems the expected mails get generated and delivered, but a segfault also
> happens on occasion. Not with the regtest as it was, but with a few minor
> modifications (adding an unreachable mailserver, and giving it a little more
> time seems to be the most reliable reproduction a.t.m.) it will crash
> consistently after 11 seconds. So I guess the patch needs a bit more
> tweaking.
> 
> Regards,
> PiBa-NL (Pieter)
> 
> Core was generated by `haproxy -d -f /tmp/vtc.37274.4b8a1a3a/h1/cfg'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x00500955 in chk_report_conn_err (check=0x802616a10,
> errno_bck=0, expired=1) at src/checks.c:689
> 689 dns_trigger_resolution(check->server->dns_requester);
> (gdb) bt full
> #0  0x00500955 in chk_report_conn_err (check=0x802616a10,
> errno_bck=0, expired=1) at src/checks.c:689

Indeed, this function should not have any special effect in this case;
we need to prepend this at the beginning of chk_report_conn_err():

if (!check->server)
return;

We need to make sure that check->server is properly tested everywhere.
With a bit of luck this one was the only remnant.
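
To illustrate where the guard sits (with minimal stand-in types only; the
real structures and the rest of the function live in haproxy's sources and
are much richer), a sketch could look like this:

#include <stddef.h>

/* Minimal stand-ins for illustration only; the real definitions live in
 * haproxy's include tree. */
struct server { void *dns_requester; };
struct check  { struct server *server; };

/* Hypothetical placeholder for the original error-reporting logic. */
static void report_check_error(struct check *check, int errno_bck, int expired)
{
    (void)check; (void)errno_bck; (void)expired;
}

/* Sketch: email-alert checks carry no server, so bail out before anything
 * dereferences check->server (the crash above happened in
 * dns_trigger_resolution(check->server->dns_requester)). */
static void chk_report_conn_err(struct check *check, int errno_bck, int expired)
{
    if (!check->server)
        return;

    /* ... the rest of the original function runs only for checks that
     * are attached to a server ... */
    report_check_error(check, errno_bck, expired);
}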

Thanks!
Willy



Re: haproxy issue tracker discussion

2019-01-12 Thread Tim Düsterhus
Willy,

On 12.01.19 at 12:45, Willy Tarreau wrote:
>> This example makes me wonder, though: Should the various branches be
>> separate repositories on GitHub like on haproxy.org or should they be
>> actual branches (e.g. 1.8.x in haproxy/haproxy instead of
>> haproxy/haproxy-1.8) for the mirror?
> 
> I've been thinking about this for a while in the past, which is the
> reason why we purposely didn't upload the other branches yet. On the
> one hand having all branches at once in a single repository is
> convenient to retrieve everything at once. On the other hand, most
> people cloning the repo are not interested in retrieving maintenance
> versions for old branches. In addition, while at the moment and for
> simplicity reasons, all stable maintainers do have write access to
> the master repository, in the future we could imagine delegating some
> outdated branches to certain contributors who still need to maintain
> them for whatever reason, and in this case we might not feel much at
> ease with granting them a write access to all other repos and have to
> face their occasional mistakes.

The situation on GitHub does not need to mirror the situation on
haproxy.org. You could still use separated repositories on haproxy.org
to separate permissions and push the "validated" commits to GitHub. This
is what the PHP project does. The canonical repositories live on their
infrastructure and GitHub is a mirror only. They have a pre-receive hook
that prevents pushes if files / branches outside one's permission set
are touched: https://github.com/php/karma/blob/master/hooks/pre-receive

The post-receive hook would then do the following for dev:

git push github master:master
git push github --tags

And this for 1.8:

git push github master:1.8.x
git push github --tags

> This perfectly matches the situation I've been in with the kernel:
> I inherited old branches, and other developers didn't have to be scared of
> my mistakes because I could only affect the part I was responsible for.
> As such it makes me think that it's probably better to only have the
> master branch in the main repo, and create separate repos for the
> maintenance branches.
> 
>> The former would require referencing issues as haproxy/haproxy#123. The
>> latter allows for a more simple #123.
> 
> Given that we'd prefer *not* to see them automatically closed, I don't
> see this as a limitation, quite the opposite!

Just mentioning the issue does not close it! You have to prefix the
number with a "(Fix(es)?|Closes?)".
Mentioning the issue comes with the nice property that the commit
appears in the timeline of the issue, which I consider useful.

>> The following GitHub v4 API query pulls issues with
>> state == OPEN && (label == "1.6-affected" || label == "enhancement")
>> for Lukas' test repository from the API. "enhancement" is to be replaced
>> by the other "*-affected" labels.
>>
>> {
>>   repository(owner: "lukastribus", name: "hap-issue-trial") {
>> issues(states: [OPEN], labels: ["1.6-affected", "enhancement"],
>> last: 100) {
>>   nodes {
>> title
>>   }
>> }
>>   }
>> }
> 
> Mind you, I don't have any idea what language this code block uses or
> where it has to be stuffed, so I'll trust you :-)

It's the Graph Query Language the v4 API of GitHub uses:
https://developer.github.com/v4/. You send it as the body of the HTTP
API request and get back a JSON formatted reply with the data you asked
for. And if it's non-empty you send a new API request that opens /
closes the issue in question.
Where it needs to be stuffed: On some server that runs a cronjob /
ingests the webhooks. :-)
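
As a rough sketch of what the cronjob side could look like (illustrative
only: the token handling, the hard-coded query and the absence of response
parsing are all assumptions), posting the query above with libcurl boils
down to:

#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Assumption: a personal access token is exported as GITHUB_TOKEN. */
    const char *token = getenv("GITHUB_TOKEN");
    const char *body =
        "{\"query\":\"{ repository(owner: \\\"lukastribus\\\", "
        "name: \\\"hap-issue-trial\\\") { issues(states: [OPEN], "
        "labels: [\\\"1.6-affected\\\", \\\"enhancement\\\"], last: 100) "
        "{ nodes { title } } } }\"}";
    char auth[256];
    struct curl_slist *hdrs = NULL;
    CURL *curl;
    CURLcode ret;

    if (!token) {
        fprintf(stderr, "GITHUB_TOKEN is not set\n");
        return 1;
    }
    snprintf(auth, sizeof(auth), "Authorization: bearer %s", token);

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    hdrs = curl_slist_append(hdrs, auth);
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, "User-Agent: hap-issue-cron");

    /* POST the GraphQL document to the v4 endpoint; the JSON reply is
     * written to stdout by libcurl's default write callback. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.github.com/graphql");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    ret = curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return ret == CURLE_OK ? 0 : 1;
}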

In the long run it can probably be stuffed as a GitHub Action and be
part of the repository (in the .github folder). They are still a Beta,
though: https://developer.github.com/actions/

 - and issue and feature request template
>>>
>>> For this one I have no idea since I don't know much how it works.
>>
> >> You need to add a markdown file to the .github/ISSUE_TEMPLATE folder in the
>> repository:
>>
>> https://github.com/lukastribus/hap-issue-trial/blob/master/.github/ISSUE_TEMPLATE/Bug.md
>>
>> see:
>> https://help.github.com/articles/manually-creating-a-single-issue-template-for-your-repository/
> 
> OK, that's manageable I guess, thanks. I find it a bit annoying that it uses
> the code repository to store the github interface configuration, but aside
> from this it's probably OK.

At least you can use the .github/ hidden folder now. In former times
they had to be placed at the root of the repository.

 status: pending-backport
>>>
>>> I think this one is implied by the presence of "affects:"
>>
>> Not necessarily. "affects" without "pending-backport" probably needs
>> more work finding the issue first, while "pending-backport" can be as
>> easy as a cherry-pick.
> 
> OK, I see your point, but in this case once the dev branch is fixed and
> "affects: dev" is no longer there, pending-backport is implied
> by ("affects: 1.x" and not "affects: 

Re: Keep client side open on FIN till backend responds

2019-01-12 Thread Willy Tarreau
Hi,

On Fri, Jan 11, 2019 at 01:05:15PM +0200, Assen Totin wrote:
> Hi, everybody!
> 
> I'm facing an issue with a somewhat weird/broken client and I'm looking for
> some advice on it.
> 
> The client opens a HTTPS session and sends its request (POST data in my
> case). The HTTPS part is fine. The POST data is small, so it fits into a
> single TCP packet, which arrives with ACK,PSH,FIN flags set. HAProxy
> diligently responds with FIN,ACK and closes the client connection, not
> sending anything to the backend server. It also logs the request as CR--
> which seems proper.
> 
> There is nothing specific in my HAProxy setup in this case, default config
> with a single frontend/backend. I'm on 1.8 series.

It is *not* supposed to happen by default; we're extremely careful about
half-closed requests, and in fact in 1.8, HTTP/2 is translated to HTTP/1
with a close appended just after the request. Are you up to date with your
1.8 version? Also, are you certain your config doesn't contain
"option abortonclose"? This option exists exactly to cause the behaviour
you're observing.

> I'm aware that sending a FIN on a TLS connection violates the TLS RFC, but
> I suspect that such a client would do the same on a non-TLS connection too.
> It brings me to the question, is there a way to tell HAProxy not to close a
> client connection upon receipt of FIN on the client side if there is an
> unserved HTTP request, but rather keep the client connection half-open
> until the backend responds?

The client connection is half-open in this case because that's how it is
at the transport level. But the request should properly be served regardless
of this.

> The issue I'm facing is likely exacerbated by
> the fact that the client sends FIN together with the HTTP request, so there is
> no backend connection yet when HAProxy decides to close the client
> connection.

As I said, it's exactly what is done for requests coming from HTTP/2, so
if it's neither a configuration issue nor a known bug, you may be hitting a
corner case that hasn't been identified yet.

> Any ideas will he much appreciated. I'm open to testing config
> changes/patches if needed (or even writing one with some guidance).

Great ;-) Please double-check the points above, and share a reduced
config exhibiting the problem.

Thanks!
Willy



Re: haproxy issue tracker discussion

2019-01-12 Thread Willy Tarreau
Hi Tim,

On Sat, Jan 12, 2019 at 12:24:23PM +0100, Tim Düsterhus wrote:
> > This one is less of a problem because the likelihood that someone writes
> > "fixes haproxy/haproxy/#4" in a commit message is particularly low, unless
> > they do it on purpose to annoy us of course.
> 
> It would only close on us if the person creating the other commit had
> the necessary permissions in our issue tracker.

OK.

> This example makes me wonder, though: Should the various branches be
> separate repositories on GitHub like on haproxy.org or should they be
> actual branches (e.g. 1.8.x in haproxy/haproxy instead of
> haproxy/haproxy-1.8) for the mirror?

I've been thinking about this for a while in the past, which is the
reason why we purposely didn't upload the other branches yet. On the
one hand having all branches at once in a single repository is
convenient to retrieve everything at once. On the other hand, most
people cloning the repo are not interested in retrieving maintenance
versions for old branches. In addition, while at the moment and for
simplicity reasons, all stable maintainers do have write access to
the master repository, in the future we could imagine delegating some
outdated branches to certain contributors who still need to maintain
them for whatever reason, and in this case we might not feel much at
ease with granting them a write access to all other repos and have to
face their occasional mistakes.

This perfectly matches the situation I've been in with the kernel:
I inherited old branches, and other developers didn't have to be scared of
my mistakes because I could only affect the part I was responsible for.
As such it makes me think that it's probably better to only have the
master branch in the main repo, and create separate repos for the
maintenance branches.

> The former would require referencing issues as haproxy/haproxy#123. The
> latter allows for a more simple #123.

Given that we'd prefer *not* to see them automatically closed, I don't
see this as a limitation, quite the opposite!

> > I'm wondering if based on the previous work you did on the
> > pull requests using the API it would be possible to :
> >   - reopen closed issues that still have a "*-affected" or "needs triage" 
> > label
> >   - close issues that don't have "*-affected" nor "needs triage"
> > 
> > In this case it would mean we just need to use the labels and not care
> > about the state at all.
> 
> Yes. You can pull issues with a specific state and a specific label from
> the API and you can receive webhooks when an issue is being opened or
> closed.

Fine!

> The following GitHub v4 API query pulls issues with
> state == OPEN && (label == "1.6-affected" || label == "enhancement")
> for Lukas' test repository from the API. "enhancement" is to be replaced
> by the other "*-affected" labels.
> 
> {
>   repository(owner: "lukastribus", name: "hap-issue-trial") {
> issues(states: [OPEN], labels: ["1.6-affected", "enhancement"],
> last: 100) {
>   nodes {
> title
>   }
> }
>   }
> }

Mind you, I don't have any idea what language this code block uses or
where it has to be stuffed, so I'll trust you :-)

> >> - and issue and feature request template
> > 
> > For this one I have no idea since I don't know much how it works.
> 
> You need to add a markdown file to the .github/ISSUE_TEMPLATE folder in the
> repository:
> 
> https://github.com/lukastribus/hap-issue-trial/blob/master/.github/ISSUE_TEMPLATE/Bug.md
> 
> see:
> https://help.github.com/articles/manually-creating-a-single-issue-template-for-your-repository/

OK, that's manageable I guess, thanks. I find it a bit annoying that it uses
the code repository to store the github interface configuration, but aside
from this it's probably OK.

> >> status: pending-backport
> > 
> > I think this one is implied by the presence of "affects:"
> 
> Not necessarily. "affects" without "pending-backport" probably needs
> more work finding the issue first, while "pending-backport" can be as
> easy as a cherry-pick.

OK, I see your point, but in this case once the dev branch is fixed and
"affects: dev" is no longer there, pending-backport is implied by
("affects: 1.x" and not "affects: dev"). This even works for bugs
that only affect older branches. It's even better, as we have always
implied that a backport to a branch is not welcome until the next
branch is fixed. Thus "backport-pending" for a branch is always
deduced from "affects: branch" and not "affects: next branch".
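
Written out as a tiny predicate, just to make the deduction explicit (the
branch names and the hard-coded labels below are purely illustrative):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in: in practice this would look up the "*-affected"
 * labels on the GitHub issue; here 1.8 and 1.9 are hard-coded as affected
 * and dev as already fixed. */
static bool affects(const char *branch)
{
    return strcmp(branch, "1.8") == 0 || strcmp(branch, "1.9") == 0;
}

/* A backport to <branch> is pending once the next (newer) branch no longer
 * carries the "affected" label. */
static bool backport_pending(const char *branch, const char *next_branch)
{
    return affects(branch) && !affects(next_branch);
}

int main(void)
{
    printf("backport to 1.9 pending: %d\n", backport_pending("1.9", "dev")); /* 1 */
    printf("backport to 1.8 pending: %d\n", backport_pending("1.8", "1.9")); /* 0: wait for 1.9 */
    return 0;
}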

Willy



Re: Replicated stick tables have absurd values for conn_cur

2019-01-12 Thread Tim Düsterhus
Emerson,

On 07.01.19 at 13:40, Emerson Gomes wrote:
> Just to update you, I have tried the patch, and while I didn't see any new
> occurrences of the underflow, HAProxy started to crash constantly...
> 
> Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
> Current worker #1 (14366) exited with code 139 (Segmentation fault)
> Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
> exit-on-failure: killing every workers with SIGTERM
> Jan 07 10:32:37 afrodite haproxy[14364]: [WARNING] 006/103237 (14364) : All
> workers exited. Exiting... (139)
> 
> I am not sure if the segfaults are related to the patch - Continuing
> investigation...
> 

I only checked whether my patch compiled successfully, but not whether
it actually worked. I did not find the time to take a deeper look yet,
I'm afraid.

Did you find out whether the segfaults are caused by the patch and
where exactly it segfaults?

Best regards
Tim Düsterhus



Re: haproxy issue tracker discussion

2019-01-12 Thread Tim Düsterhus
Willy,

On 12.01.19 at 08:47, Willy Tarreau wrote:
>> The following issue was closed ... :
>> https://github.com/lukastribus/hap-issue-trial/issues/3
>>
>> from another repository, just because I referenced the issue
>> prepending the word "Fix":
>> https://github.com/lukastribus/hap-issue-trial-1.8/commit/91a0776fec856766c64b8f3a34a796718c2368c1
> 
> This one is less of a problem because the likelihood that someone writes
> "fixes haproxy/haproxy/#4" in a commit message is particularly low, unless
> they do it on purpose to annoy us of course.

It would only close on us if the person creating the other commit had
the necessary permissions in our issue tracker.

This example makes me wonder, though: Should the various branches be
separate repositories on GitHub like on haproxy.org or should they be
actual branches (e.g. 1.8.x in haproxy/haproxy instead of
haproxy/haproxy-1.8) for the mirror?

The former would require referencing issues as haproxy/haproxy#123. The
latter allows for a more simple #123.

> As our intention, I believe, is to keep the issue open until all
>> affected branches are fixed, this github feature is a little
>> inconvenient. But I guess we can just refer to the issue by prepending
>> it with "issue" or "bug", so GH doesn't see "Fix". Still it feels like
>> we are working against the system.
> 
> As often yes. I'm wondering if based on the previous work you did on the
> pull requests using the API it would be possible to :
>   - reopen closed issues that still have a "*-affected" or "needs triage" 
> label
>   - close issues that don't have "*-affected" nor "needs triage"
> 
> In this case it would mean we just need to use the labels and not care
> about the state at all.

Yes. You can pull issues with a specific state and a specific label from
the API and you can receive webhooks when an issue is being opened or
closed.

The following GitHub v4 API query pulls issues with
state == OPEN && (label == "1.6-affected" || label == "enhancement")
for Lukas' test repository from the API. "enhancement" is to be replaced
by the other "*-affected" labels.

{
  repository(owner: "lukastribus", name: "hap-issue-trial") {
issues(states: [OPEN], labels: ["1.6-affected", "enhancement"],
last: 100) {
  nodes {
title
  }
}
  }
}

>> - and issue and feature request template
> 
> For this one I have no idea since I don't know much how it works.

You need to add a markdown file to the .github/ISSUE_TEMPLATE folder in the
repository:

https://github.com/lukastribus/hap-issue-trial/blob/master/.github/ISSUE_TEMPLATE/Bug.md

see:
https://help.github.com/articles/manually-creating-a-single-issue-template-for-your-repository/

>> status: pending-backport
> 
> I think this one is implied by the presence of "affects:"

Not necessarily. "affects" without "pending-backport" probably needs
more work finding the issue first, while "pending-backport" can be as
easy as a cherry-pick.

Best regards
Tim Düsterhus



Re: haproxy issue tracker discussion

2019-01-12 Thread Tim Düsterhus
Lukas,

On 12.01.19 at 02:53, Lukas Tribus wrote:
> As our intention, I believe, is to keep the issue open until all
> affected branches are fixed, this github feature is a little
> inconvenient. But I guess we can just refer to the issue by prepending
> it with "issue" or "bug", so GH doesn't see "Fix". Still it feels like
> we are working against the system.

Simply doing "see #123" / "ref #123" instead of "Fixes #123" probably
would work as well. The rest of the message probably makes it clear that
it's a fix.

> - a rough consensus of the process (like the sequence above)

It's looking good. I believe the initial "needs-triage" label can be
added using the template:
https://help.github.com/articles/manually-creating-a-single-issue-template-for-your-repository/
(step 5).

Best regards
Tim Düsterhus