Re: haproxy 2.2-dev8-7867525 - 100% cpu usage on 1 core after config 'reload'

2020-05-29 Thread PiBa-NL

Hi Christopher,

On 29-5-2020 at 09:00, Christopher Faulet wrote:

On 29/05/2020 at 00:45, PiBa-NL wrote:

Hi List,

I noticed an issue with the 2.2-dev8 release, and with 2.2-dev8-7867525 the
issue is still there: when a reload is 'requested', it fails to stop
the old worker..



Hi Pieter,

I was able to reproduce the bug. Thanks for the reproducer. I've fixed 
it. It should be ok now.



Thanks for the quick fix! It works for me.

Regards,
PiBa-NL (Pieter)




haproxy 2.2-dev8-7867525 - 100% cpu usage on 1 core after config 'reload'

2020-05-28 Thread PiBa-NL

Hi List,

I noticed an issue with the 2.2-dev8 release, and with 2.2-dev8-7867525 the
issue is still there: when a reload is 'requested', it fails to stop
the old worker.. The old worker shuts down most of its threads, but 1
thread starts running at 100% CPU usage of a core. I'm not sure yet 'when'
the issue was introduced exactly.. I've skipped quite a few dev releases
and didn't have time to dissect it to a specific version/commit yet. I'll
try to do that during the weekend if no one does it earlier ;)..
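
As a side note, that 'dissecting' can be automated with `git bisect run` plus a reproducer script. Below is a self-contained toy sketch of the workflow: a throwaway repo stands in for the haproxy tree, and a `grep` stands in for a hypothetical reproducer script that exits non-zero when the bug shows up.

```shell
# Toy illustration of `git bisect run`: build a 5-commit history where a
# "bug" appears in commit 4, then let bisect find the first bad commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email pieter@example.com   # throwaway identity
git config user.name "bisect demo"
for i in 1 2 3 4 5; do
    # the "bug" appears in commit 4 and stays in
    if [ "$i" -ge 4 ]; then echo "bug $i" > state; else echo "ok $i" > state; fi
    git add state
    git commit -qm "commit $i"
done
# bad = HEAD (commit 5), good = root commit (commit 1)
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" > /dev/null
# exit 0 = good, non-zero = bad; a real reproducer script goes here
git bisect run sh -c 'grep -q "^ok" state' > /dev/null
first_bad=$(git bisect log | grep 'first bad commit')
echo "$first_bad"
git bisect reset > /dev/null
```

Against the real tree, the bad/good points would be actual haproxy commits and the run command a script that starts haproxy, triggers the reload, and checks for a spinning old worker.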


Normally I don't use -W but 'manually' restart haproxy with -sf
parameters.. but this seemed like the easier reproduction..
Also I 'think' I noticed once that, despite the -W parameter and logging
output saying a worker was spawned, there was only 1 process running,
but I couldn't reproduce that one so far again... Also I haven't tried to
see if and how I can connect through the master to the old worker process
yet... perhaps also something I can try later..
I 'suspect' it has something to do with the health checks though... (and
their refactoring, as I think happened?)


Anyhow, perhaps this is already enough for someone to take a closer look?
If more info is needed I'll try to provide it :).

Regards,
PiBa-NL (Pieter)

*Reproduction (works 99% of the time..):*
  haproxy -W -f /var/etc/haproxy-2020/haproxy.cfg
  kill -s USR2 17683
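
A quick way to spot the spinning old worker after the reload is to filter `ps` output for haproxy processes near 100% CPU. A minimal sketch, assuming a `ps` that accepts the BSD-style `-o` column list (FreeBSD and Linux procps both do):

```shell
# List haproxy processes burning (close to) a full core; on a healthy
# or idle setup this prints nothing.
ps -ax -o pid,pcpu,comm | awk '$3 ~ /haproxy/ && $2 + 0 > 90 { print "spinning pid:", $1 }'
```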

*haproxy.cfg*
frontend www
    bind            127.0.0.1:81
    mode            http
backend testVPS_ipv4
    mode            http
    retries            3
    option            httpchk OPTIONS /Test HTTP/1.1\r\nHost:\ test.test.nl
    server            vps2a 192.168.30.10:80 id 10109 check inter 15000
backend O365mailrelay
    mode            tcp
    option            smtpchk HELO
    no option log-health-checks
    server-template            O365smtp 2 test.mail.protection.outlook.com:25 id 122 check inter 1


*haproxy -vv*
HA-Proxy version 2.2-dev8-7867525 2020/05/28 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: FreeBSD 11.1-RELEASE FreeBSD 11.1-RELEASE #0 r321309: Fri 
Jul 21 02:08:28 UTC 2017 
r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-null-dereference -Wno-unused-label 
-Wno-unused-parameter -Wno-sign-compare -Wno-ignored-qualifiers 
-Wno-unused-command-line-argument -Wno-missing-field-initializers 
-Wno-address-of-packed-member -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_STATIC_PCRE=1 
USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1


Feature list : -EPOLL +KQUEUE -NETFILTER +PCRE +PCRE_JIT -PCRE2 
-PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -BACKTRACE 
+STATIC_PCRE -STATIC_PCRE2 +TPROXY -LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT 
-CRYPT_H +GETADDRINFO +OPENSSL +LUA -FUTEX +ACCEPT4 +ZLIB -SLZ 
+CPU_AFFINITY -TFO -NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD 
-OBSOLETE_LINKER -PRCTL -THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=16).
Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with clang compiler version 4.0.0 (tags/RELEASE_400/final 297347)
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")


Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
        h2 : mode=HTTP   side=FE|BE mux=H2
      fcgi : mode=HTTP   side=BE    mux=FCGI
 <default> : mode=HTTP   side=FE|BE mux=H1
 <default> : mode=TCP    side=FE|BE mux=PASS

Available services : none

Available filters :
    [SPOE] spoe
    [CACHE] cache
    [FCGI] fcgi-app
    [TRACE] trace
    [COMP] compression




Re: disabling test if ipv6 not supported ?

2020-05-21 Thread PiBa-NL

Hi Ilya,
On 21-5-2020 at 04:57, Илья Шипицин wrote:

Hello,

It seems the freebsd images on cirrus-ci run with no ipv6 support:
https://cirrus-ci.com/task/6613883307687936

It fails on the srv3 configuration; any idea why it doesn't complain about
srv2, as that also seems to use IPv6..?


Any idea how we can skip such tests?


I think the test or code should get fixed, not skipped because it fails.

Note that this specific test recently got this extra 'addr ::1' server
check parameter on srv3. Perhaps that syntax is written/parsed wrongly?




Cheers,
Ilya Shipitcin

Regards, PiBa-NL (Pieter)





Re: server-state application failed for server 'x/y', invalid srv_admin_state value '32'

2020-04-06 Thread PiBa-NL

Hi Baptiste,

On 6-4-2020 at 11:43, Baptiste wrote:

Hi Piba,

my answers inline.

Using 2.2-dev5-c3500c3, I've got both a server and a
server-template/server that are marked 'down' due to DNS not replying
with (enough) records. That by itself is alright.. (and likely has been
like that for a while, so I don't think it's a regression.)


You're right, this has always been like that.
For the 'regression' part I was thinking about the warnings below, which
were 'likely' like that before 2.2-dev5 as well; it wasn't about marking
servers as down, which is totally expected & desired ;). I seem to read
your response here as if you thought I thought otherwise... (darn, that
gets hard to understand, sorry..) And as you have confirmed it's already
causing the warnings in 1.8 as well, so it is not a regression in
2.2-dev5 itself.


But when I perform a 'seamless reload' with a server-state file it
causes the warnings below for both server and template:
[WARNING] 095/150909 (74796) : server-state application failed for
server 'x/y', invalid srv_admin_state value '32'
[WARNING] 095/150909 (74796) : server-state application failed for
server 'x2/z3', invalid srv_admin_state value '32'

Is there a way to get rid of these warnings, and if 32 is an invalid
value, how did it get into the state file at all?


I can confirm this is not supposed to happen!
And I could reproduce this behavior since HAProxy 1.8.

I'm not sure if it's a bug or a feature request, but I do think it
should be changed :). Can it be added to some todo list? Thanks.


This is a bug from my point of view.
I'll check this.
Could you please open a github issue and tag me in there?

Done: https://github.com/haproxy/haproxy/issues/576


Baptiste


Thanks and regards,
PiBa-NL (Pieter)



server-state application failed for server 'x/y', invalid srv_admin_state value '32'

2020-04-05 Thread PiBa-NL

Hi List,

Using 2.2-dev5-c3500c3, I've got both a server and a
server-template/server that are marked 'down' due to DNS not replying
with (enough) records. That by itself is alright.. (and likely has been
like that for a while, so I don't think it's a regression.)


But when I perform a 'seamless reload' with a server-state file it
causes the warnings below for both the server and the template:
[WARNING] 095/150909 (74796) : server-state application failed for
server 'x/y', invalid srv_admin_state value '32'
[WARNING] 095/150909 (74796) : server-state application failed for
server 'x2/z3', invalid srv_admin_state value '32'


Is there a way to get rid of these warnings, and if 32 is an invalid
value, how did it get into the state file at all?


## Severely cut down config snippet..:
backend x
    server            y AppSrv:8084 id 161 check inter 1 weight 1 resolvers globalresolvers

backend x2
    server-template    z 3 smtp.company.tld:25 id 167 check inter 1 weight 10 resolvers globalresolvers


One could argue that my backend x should have a better DNS name
configured; if it doesn't exist I apparently messed up something..
For the second backend, x2, though: isn't it 'normal' to have the
template sizing account for 'future growth' of the cluster, and as such
always have some extra template servers available that are in 'MAINT /
resolution' state? So when 2 servers of x2 are up and the 3rd is in
resolution state, it shouldn't warn on a restart imho, as that is to be
expected for most setups?


I'm not sure if it's a bug or a feature request, but I do think it
should be changed :). Can it be added to some todo list? Thanks.


Thanks and regards,
PiBa-NL (Pieter)




Re: [PATCHES] dns related

2020-03-26 Thread PiBa-NL

Hi Baptiste,

On 26-3-2020 at 12:46, William Lallemand wrote:

On Wed, Mar 25, 2020 at 11:15:37AM +0100, Baptiste wrote:

Hi there,

A couple of patches here to cleanup and fix some bugs introduced
by 13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b.

Baptiste

Thanks Baptiste, merged.



Thanks for this one.
A question though: are you still working on making a non-existing
server-template go into 'resolution' state?


See the below/attached picture for some details.. (or see
https://www.mail-archive.com/haproxy@formilux.org/msg36373.html )

Regards,
PiBa-NL (Pieter)



Re: commit 493d9dc makes a SVN-checkout stall..

2020-03-25 Thread PiBa-NL

Hi Olivier, Willy,

Just to confirm, as expected it (c3500c3) indeed works for me :).
Thanks for the quick fix!

Regards,
PiBa-NL (Pieter)

On 25-3-2020 at 17:16, Willy Tarreau wrote:

On Wed, Mar 25, 2020 at 05:08:03PM +0100, Olivier Houchard wrote:

That is... interesting, not sure I reached such an outstanding result.

Oh I stopped trying to guess long ago :-)


This is now fixed, sorry about that !

Confirmed, much better now, thanks!

Willy





commit 493d9dc makes a SVN-checkout stall..

2020-03-24 Thread PiBa-NL

Hi List, Willy,

Today I thought: let's give v2.2-dev5 a try in my production environment ;).
It soon turned out to cause SVN checkouts to stall/disconnect for a
repository we run locally on a Collab-SVN server.


I tracked it down to this commit: 493d9dc (MEDIUM: mux-h1: do not
blindly wake up the tasklet at end of request anymore) causing the
problem for the first time. Is there something tricky in there that could
be suspected to cause the issue? Perhaps a patch I can try?


While 'dissecting' the issue I deleted the whole directory each time and
performed a new svn checkout several times. It doesn't always stall at
the exact same point, but usually after checking out around +-20 files,
something between 0.5 and 2 MB; the commit before that one allows me to
check out 500+ MB through haproxy without issue.. A Wireshark capture
seems to show that haproxy sends several RST,ACK packets for 4
different connections to the svn server in the same millisecond, after
it was quiet for 2 seconds.. The whole issue, from the start of the
checkout until it stalls, happens within a timeframe of 15 seconds.


The 'nokqueue' option I usually try on my FreeBSD machine doesn't change anything.

Hope you have an idea where to look. Providing captures/logs is a bit
difficult without some careful scrubbing..


Regards,
PiBa-NL (Pieter)

### Complete config (that still reproduces the issue.. things can't get
much simpler than this..):

frontend InternalSites.8.6-merged
    bind            192.168.8.67:80
    mode            http
    use_backend APP01-JIRA-SVN_ipvANY

backend APP01-JIRA-SVN_ipvANY
    mode            http
    server            svn 192.168.104.20:8080

### uname -a
FreeBSD freebsd11 11.1-RELEASE FreeBSD 11.1-RELEASE #0 r321309: Fri Jul 
21 02:08:28 UTC 2017 
r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64


### haproxy -vv
HA-Proxy version 2.2-dev5-3e128fe 2020/03/24 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-null-dereference -Wno-unused-label 
-Wno-unused-parameter -Wno-sign-compare -Wno-ignored-qualifiers 
-Wno-unused-command-line-argument -Wno-missing-field-initializers 
-Wno-address-of-packed-member -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_STATIC_PCRE=1 
USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1


Feature list : -EPOLL +KQUEUE -NETFILTER +PCRE +PCRE_JIT -PCRE2 
-PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -BACKTRACE 
+STATIC_PCRE -STATIC_PCRE2 +TPROXY -LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT 
-CRYPT_H +GETADDRINFO +OPENSSL +LUA -FUTEX +ACCEPT4 +ZLIB -SLZ 
+CPU_AFFINITY -TFO -NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD 
-OBSOLETE_LINKER -PRCTL -THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=16).
Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")


Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
        h2 : mode=HTTP   side=FE|BE mux=H2
      fcgi : mode=HTTP   side=BE    mux=FCGI
 <default> : mode=HTTP   side=FE|BE mux=H1
 <default> : mode=TCP    side=FE|BE mux=PASS

Available services : none

Available filters :
    [SPOE] spoe
    [CACHE] cache
    [FCGI] fcgi-app
    [TRACE] trace
    [COMP] compression




Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)

2020-02-19 Thread PiBa-NL

Hi Baptiste,

On 19-2-2020 at 13:06, Baptiste wrote:

Hi,

I found a couple of bugs in that part of the code.
Can you please try the attached patch? (0001 is useless but I share it
too just in case.)

Works for me, thanks!
It will allow parsing of additional records for SRV queries only and,
when done, will silently ignore any records which are not A or AAAA.


@maint team, please don't apply the patch yet, I want to test it much 
more before.



When the final patch is ready I'll be happy to give it a try as well.

Baptiste

On a side note: with the config below I would expect 2 servers with
status 'MAINT (resolving)'.


Using this configuration in Unbound (4 server IPs defined):
server:
local-data: "_https._tcp.pkg.test.tld 3600 IN SRV 0 100 80 srv1.test.tld"
local-data: "_https._tcp.pkg.test.tld 3600 IN SRV 0 100 80 srv2.test.tld"
local-data: "srv1.test.tld 3600 IN A 192.168.0.51"
local-data: "srv2.test.tld 3600 IN A 192.168.0.52"
local-data: "srvX.test.tld 3600 IN A 192.168.0.53"
local-data: "srvX.test.tld 3600 IN A 192.168.0.54"

And this in a HAProxy backend:
    server-template            PB_SRVrecords 3 ipv4@_https._tcp.pkg.test.tld:77 id 10110 check inter 18 resolvers globalresolvers resolve-prefer ipv4
    server-template            PB_multipleA 3 i...@srvx.test.tld:78 id 10111 check inter 18 resolvers globalresolvers resolve-prefer ipv4


This results in 6 servers, of which 1 server has 'MAINT (resolution)'
status and 1 has an IP of 0.0.0.0 but shows as 'DOWN'. I would have
expected 2 servers with status MAINT?
(p.s. none of the IPs actually exist on my network, so it is correct
that the other servers are also shown as down..)


PB_ipv4,PB_SRVrecords1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,124,124,,1,10102,10110,,0,,2,0,,0,L4CON,,74995,0,0,0,0,0,0,0,0,-1,,,0,0,0,0Layer4 
connection 
problem,,2,3,0192.168.0.51:80,,http0,0,0,,,0,,0,0,0,0,0,
PB_ipv4,PB_SRVrecords2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,94,94,,1,10102,2,,0,,2,0,,0,L4CON,,75029,0,0,0,0,0,0,0,0,-1,,,0,0,0,0Layer4 
connection 
problem,,2,3,0192.168.0.52:80,,http0,0,0,,,0,,0,0,0,0,0,
PB_ipv4,PB_SRVrecords3,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,64,64,,1,10102,3,,0,,2,0,,0,L4CON,,75039,0,0,0,0,0,0,0,0,-1,,,0,0,0,0Layer4 
connection problem,,2,3,00.0.0.0:77,,http0,0,0,,,0,,0,0,0,0,0,
PB_ipv4,PB_multipleA1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,2,34,34,,1,10102,10111,,0,,2,0,,0,L4CON,,75002,0,0,0,0,0,0,0,0,-1,,,0,0,0,0Layer4 
connection 
problem,,2,3,0192.168.0.53:78,,http0,0,0,,,0,,0,0,0,0,0,
PB_ipv4,PB_multipleA2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,2,4,4,,1,10102,5,,0,,2,0,,0,L4CON,,75014,0,0,0,0,0,0,0,0,-1,,,0,0,0,0Layer4 
connection 
problem,,2,3,0192.168.0.54:78,,http0,0,0,,,0,,0,0,0,0,0,
PB_ipv4,PB_multipleA3,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT 
(resolution),1,1,0,0,1,199,199,,1,10102,6,,0,,2,0,,00,0,0,0,0,0,0,0,-1,,,0,0,0,00.0.0.0:78,,http0,0,0,,,0,,0,0,0,0,0,


If additional info is desired, please let me know :).

On Tue, Feb 18, 2020 at 2:03 PM Baptiste wrote:


Hi guys,

Thx Tim for investigating.
I'll check the PCAP and see why such behavior happens.

Baptiste


On Tue, Feb 18, 2020 at 12:09 AM Tim Düsterhus wrote:

Pieter,

On 09.02.20 at 15:35, PiBa-NL wrote:
> Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional
> records from SRV responses)' I get seemingly proper working resolving
> of a server name.
> After this commit all responses are counted as 'invalid' in the
> socket stats.

I can confirm the issue with the provided configuration. The
'if (len == 0) {' check in line 1045 of the commit causes HAProxy
to consider the responses 'invalid':


Thanks for confirming :).




https://github.com/haproxy/haproxy/commit/13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b#diff-b2ddf457bc423779995466f7d8b9d147R1045-R1048

Best regards
Tim Düsterhus


Regards,
PiBa-NL (Pieter)




Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)

2020-02-17 Thread PiBa-NL

Hi List,
Hereby a little bump. Can someone take a look?
(Maybe the pcap attachment didn't fly well through the spam filters (or
the email formatting..)?)
(Or because I (wrongly?) chose to include Baptiste specifically in my
addressing (he committed the original patch that caused the change in
behaviour)..)


Anyhow, the current '2.2-dev2-a71667c, released 2020/02/17' is still
affected.


If someone was already planning to, please don't feel 'pushed' by this
mail; I'm just trying to make sure this doesn't fall through the cracks :).

Regards,
PiBa-NL (Pieter)

On 9-2-2020 at 15:35, PiBa-NL wrote:

Hi List, Baptiste,

After updating haproxy I found that the DNS resolver is no longer
working for me. Also I wonder about the exact effect that 'hold valid'
should have.
I pointed haproxy at an 'Unbound 1.9.4' DNS server that does the
recursive resolving of the DNS requests made by haproxy.


Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional
records from SRV responses)' I get seemingly proper working resolving
of a server name.
After this commit all responses are counted as 'invalid' in the socket
stats.


Attached is also a pcap of the DNS traffic, which shows a short capture
of a single attempt where 3 retries for both A and AAAA records show
up. There is an additional record of type 'OPT' present in the
response.. But the exact same exchange keeps repeating every 5 seconds.
As for 'hold valid' (tested with the commit before this one): it seems
that the haproxy stats page shows the server in 'resolution' status way
before the 3-minute 'hold valid' has passed when I simply disconnect
the network of the server running the Unbound DNS server. Though I
guess that is less important than DNS working at all in the first
place..


If any additional information is needed, please let me know :).

Can you/someone take a look? Thanks in advance.

p.s. I think I read something about a 'vtest' that can test the haproxy
DNS functionality; if you have an example that does this I would be
happy to provide a vtest with a reproduction of the issue, though I
guess it will be kinda 'slow' if it needs to test the 'hold valid'
timings..


Regards,
PiBa-NL (Pieter)

 haproxy config:

resolvers globalresolvers
    nameserver pfs_routerbox 192.168.0.18:53
    resolve_retries 3
    timeout retry 200
    hold valid 3m
    hold nx 10s
    hold other 15s
    hold refused 20s
    hold timeout 25s
    hold obsolete 30s
    timeout resolve 5s

frontend nu_nl
    bind            192.168.0.19:433 name 192.168.0.19:433   ssl 
crt-list /var/etc/haproxy/nu_nl.crt_list

    mode            http
    log            global
    option            http-keep-alive
    timeout client        3
    use_backend nu.nl_ipvANY

backend nu.nl_ipvANY
    mode            http
    id            2113
    log            global
    timeout connect        3
    timeout server        3
    retries            3
    option            httpchk GET / HTTP/1.0\r\nHost:\ 
nu.nl\r\nAccept:\ */*
    server            nu_nl nu.nl:443 id 2114 ssl check inter 1  
verify none resolvers globalresolvers check-sni nu.nl resolve-prefer ipv4



 haproxy_socket.sh show resolvers
Resolvers section globalresolvers
 nameserver pfs_routerbox:
  sent:    216
  snd_error:   0
  valid:   0
  update:  0
  cname:   0
  cname_error: 0
  any_err: 108
  nx:  0
  timeout: 0
  refused: 0
  other:   0
  invalid: 108
  too_big: 0
  truncated:   0
  outdated:    0

 haproxy -vv
HA-Proxy version 2.2-dev0-13a9232 2020/01/22 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-null-dereference -Wno-unused-label 
-Wno-unused-parameter -Wno-sign-compare -Wno-ignored-qualifiers 
-Wno-unused-command-line-argument -Wno-missing-field-initializers 
-Wno-address-of-packed-member -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_REGPARM=1 USE_STATIC_PCRE=1 
USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1


Feature list : -EPOLL +KQUEUE -MY_EPOLL -MY_SPLICE -NETFILTER +PCRE 
+PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD 
-PTHREAD_PSHARED +REGPARM +STATIC_PCRE -STATIC_PCRE2 +TPROXY 
-LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT -CRYPT_H -VSYSCALL +GETADDRINFO 
+OPENSSL +LUA -FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY 
-TFO -NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD 
-OBSOLETE_LINKER -PRCTL -THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with mult

dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)

2020-02-09 Thread PiBa-NL

Hi List, Baptiste,

After updating haproxy I found that the DNS resolver is no longer
working for me. Also I wonder about the exact effect that 'hold valid'
should have.
I pointed haproxy at an 'Unbound 1.9.4' DNS server that does the
recursive resolving of the DNS requests made by haproxy.


Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional
records from SRV responses)' I get seemingly proper working resolving
of a server name.
After this commit all responses are counted as 'invalid' in the socket
stats.


Attached is also a pcap of the DNS traffic, which shows a short capture
of a single attempt where 3 retries for both A and AAAA records show
up. There is an additional record of type 'OPT' present in the
response.. But the exact same exchange keeps repeating every 5 seconds.
As for 'hold valid' (tested with the commit before this one): it seems
that the haproxy stats page shows the server in 'resolution' status way
before the 3-minute 'hold valid' has passed when I simply disconnect
the network of the server running the Unbound DNS server. Though I
guess that is less important than DNS working at all in the first
place..


If any additional information is needed, please let me know :).

Can you/someone take a look? Thanks in advance.

p.s. I think I read something about a 'vtest' that can test the haproxy
DNS functionality; if you have an example that does this I would be
happy to provide a vtest with a reproduction of the issue, though I
guess it will be kinda 'slow' if it needs to test the 'hold valid'
timings..


Regards,
PiBa-NL (Pieter)

 haproxy config:

resolvers globalresolvers
    nameserver pfs_routerbox 192.168.0.18:53
    resolve_retries 3
    timeout retry 200
    hold valid 3m
    hold nx 10s
    hold other 15s
    hold refused 20s
    hold timeout 25s
    hold obsolete 30s
    timeout resolve 5s

frontend nu_nl
    bind            192.168.0.19:433 name 192.168.0.19:433   ssl 
crt-list /var/etc/haproxy/nu_nl.crt_list

    mode            http
    log            global
    option            http-keep-alive
    timeout client        3
    use_backend nu.nl_ipvANY

backend nu.nl_ipvANY
    mode            http
    id            2113
    log            global
    timeout connect        3
    timeout server        3
    retries            3
    option            httpchk GET / HTTP/1.0\r\nHost:\ 
nu.nl\r\nAccept:\ */*
    server            nu_nl nu.nl:443 id 2114 ssl check inter 1  
verify none resolvers globalresolvers check-sni nu.nl resolve-prefer ipv4



 haproxy_socket.sh show resolvers
Resolvers section globalresolvers
 nameserver pfs_routerbox:
  sent:    216
  snd_error:   0
  valid:   0
  update:  0
  cname:   0
  cname_error: 0
  any_err: 108
  nx:  0
  timeout: 0
  refused: 0
  other:   0
  invalid: 108
  too_big: 0
  truncated:   0
  outdated:    0

 haproxy -vv
HA-Proxy version 2.2-dev0-13a9232 2020/01/22 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-null-dereference -Wno-unused-label 
-Wno-unused-parameter -Wno-sign-compare -Wno-ignored-qualifiers 
-Wno-unused-command-line-argument -Wno-missing-field-initializers 
-Wno-address-of-packed-member -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_REGPARM=1 USE_STATIC_PCRE=1 
USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1


Feature list : -EPOLL +KQUEUE -MY_EPOLL -MY_SPLICE -NETFILTER +PCRE 
+PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD 
-PTHREAD_PSHARED +REGPARM +STATIC_PCRE -STATIC_PCRE2 +TPROXY 
-LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT -CRYPT_H -VSYSCALL +GETADDRINFO 
+OPENSSL +LUA -FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY -TFO 
-NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
-PRCTL -THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1a-freebsd  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.1a-freebsd  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with PCRE version : 8.43 2019-02-23
Running on PCRE version : 8.43 2019-02-23
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib versio

mcli vtest broken by commit? MEDIUM: connections: Get rid of the xprt_done callback.

2020-01-22 Thread PiBa-NL

Hi Olivier,

Just to let you know, it seems this commit has broken a few regtests:
http://git.haproxy.org/?p=haproxy.git;a=commit;h=477902bd2e8c1e978ad43d22dba1f28525bb797a

https://api.cirrus-ci.com/v1/task/5885732300521472/logs/main.log
Testing with haproxy version: 2.2-dev1
#top  TEST reg-tests/mcli/mcli_show_info.vtc TIMED OUT (kill -9)
#top  TEST reg-tests/mcli/mcli_show_info.vtc FAILED (10.044) signal=9
#top  TEST reg-tests/mcli/mcli_start_progs.vtc TIMED OUT (kill -9)
#top  TEST reg-tests/mcli/mcli_start_progs.vtc FAILED (10.019) signal=9

I can reproduce it on my own FreeBSD machine as well; the test case just
sits and waits.. until the vtest timeout strikes.

Do you need more info? If so, what can I provide?

Regards,
Pieter




Re: freebsd ci is broken - commit 08fa16e - curl download stalls in reg-tests/compression/lua_validation.vtc

2020-01-15 Thread PiBa-NL

Hi Olivier, Willy, Ilya,

Thanks! I confirm 2.2-dev0-ac81474 fixes this issue for me, and
cirrus-ci also shows 'all green' again :).


Running the same test with 16 vCPUs and kqueue enabled it is 'all okay': 0
tests failed, 0 tests skipped, 200 tests passed.


On 15-1-2020 at 19:20, Olivier Houchard wrote:

Hi guys,

On Tue, Jan 14, 2020 at 09:45:34PM +0100, Willy Tarreau wrote:

Hi guys,

On Tue, Jan 14, 2020 at 08:02:51PM +0100, PiBa-NL wrote:

Below is a part of the output that the test generates for me. The first curl
request seems to succeed, but the second one runs into a timeout..
When compiled with the commit before 08fa16e 
<https://github.com/haproxy/haproxy/commit/08fa16e397ffb1c6511b98ade2a3bfff9435e521>

Ah, and unsurprisingly I'm the author :-/

I'm wondering why it only affects FreeBSD (very likely kqueue in fact, I
suppose it works if you start with -dk). Maybe something subtle escaped
me in the poller after the previous changes.


Should I update to a newer FreeBSD version, or is it likely unrelated and
in need of some developer attention? Do you (Willy or anyone) need more
information from my side? Or is there a patch I can try to validate?

I don't think I need more info for now and your version has nothing to do
with this (until proven otherwise). I apparently really broke something
there. I think I have a FreeBSD VM somewhere, in the worst case I'll ask
Olivier for some help :-)


To give you a quick update: we are investigating that, and I'm still not
really sure why it only affects FreeBSD, but we fully understood the
problem, and it should be fixed by now.

Regards,

Olivier


Regards,
PiBa-NL (Pieter)




Re: freebsd ci is broken - commit 08fa16e - curl download stalls in reg-tests/compression/lua_validation.vtc

2020-01-14 Thread PiBa-NL

Hi Ilya, Willy,

On 14-1-2020 at 21:40, Ilya Shipitsin wrote:

PiBa, how many CPU cores are you running?

It turned out that I run tests on a very small VM, which only has 1 core, 
and the tests pass.

cirrus-ci, as far as I remember, does have many cores.

I was running with 16 cores.


Can you find a single-core VM?


Well, I reconfigured the VM to have 1 core, but the same issue still shows 
up, though not every time the test is run, and actually a bit less 
often.

Below are some additional test results with different kqueue/vCPU settings.


*VM with 1 vCPU*

Running: ./vtest/VTest-master/vtest -Dno-htx=no -l -k -b 50M -t 5 -n 20 
./work/haproxy-08fa16e/reg-tests/compression/lua_validation.vtc

  Results in: 4 tests failed, 0 tests skipped, 16 tests passed

Adding "nokqueue" to the vtc file, I get:
  8 tests failed, 0 tests skipped, 12 tests passed
  4 tests failed, 0 tests skipped, 16 tests passed

So it's a bit random, but the 'nokqueue' directive does not seem to 
affect the results much.



*With 16 vCPU*
Without nokqueue: 16 tests failed, 0 tests skipped, 4 tests passed
With nokqueue (using poll): 17 tests failed, 0 tests skipped, 3 tests passed

The failure rate is certainly higher with many cores.


*Using commit 0eae632, it works OK*
Just to be sure, I re-tested on 16 cores with 2.2-dev0-0eae632, and that 
passes cleanly: 0 tests failed, 0 tests skipped, 20 tests passed


Regards,
PiBa-NL (Pieter)




Re: freebsd ci is broken - commit 08fa16e - curl download stalls in reg-tests/compression/lua_validation.vtc

2020-01-14 Thread PiBa-NL

Hi Ilya,

Thanks!

On 14-1-2020 at 07:48, Ilya Shipitsin wrote:

Hello,

since
https://github.com/haproxy/haproxy/commit/08fa16e397ffb1c6511b98ade2a3bfff9435e521

freebsd CI is red: https://cirrus-ci.com/task/5960933184897024

I'd say "it is something with CI itself"; when I run the same tests 
locally on freebsd, it is green.
Sadly I do get the same problem on my test server (version info below; 
its version 11.1 is a bit outdated, but it hasn't failed me before...).


PiBa ?


thanks,
Ilya Shipitcin


Below is part of the output that the test generates for me. The first 
curl request seems to succeed, but the second one runs into a timeout.
When compiled with the commit before 08fa16e 
<https://github.com/haproxy/haproxy/commit/08fa16e397ffb1c6511b98ade2a3bfff9435e521> 
it does not show that behaviour. The current latest commit (24c928c) is 
still affected.


 top  shell_out|  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
 top  shell_out|                                 Dload  Upload   Total   Spent    Left  Speed
 top  shell_out|100  418k    0  418k    0     0  1908k      0 --:--:-- --:--:-- --:--:-- 1908k
 top  shell_out|  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
 top  shell_out|                                 Dload  Upload   Total   Spent    Left  Speed
 top  shell_out|100  343k    0  343k    0     0   284k      0 --:--:-- --:--:-- --:--:--  284k
 [... the progress meter repeats for 15 seconds, the transfer stalled at 343k with the average speed steadily dropping ...]
 top  shell_out|curl: (28) Operation timed out after 15002 
milliseconds with 351514 bytes received

 top  shell_out|Expecting checksum 4d9c62aa5370b8d5f84f17ec2e78f483
 top  shell_out|Received checksum: da2d120aedfd693eeba9cf1e578897a8
 top  shell_status = 0x0001
 top  shell_exit not as expected: got 0x0001 wanted 0x
*    top  RESETTING after 
./work/haproxy-08fa16e/reg-tests/compression/lua_validation.vtc
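
The failing check above boils down to comparing an MD5 of the received body
against the expected one, so a stalled, truncated transfer necessarily shows up
as a checksum mismatch. A minimal sketch of that comparison (the payload here
is a stand-in, not the actual Lua-generated file from the test):

```python
import hashlib

# Stand-in payload; the real test serves a larger Lua-generated file.
full = bytes(range(256)) * 2048           # 524288 bytes in total
truncated = full[:351514]                 # curl gave up after 351514 bytes

expected = hashlib.md5(full).hexdigest()
received = hashlib.md5(truncated).hexdigest()

# A stalled/truncated download is detected by the checksum mismatch.
print(expected == received)               # → False
```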


Should I update to a newer FreeBSD version, or is it likely unrelated 
and in need of some developer attention? Do you (Willy or anyone) need 
more information from my side? Or is there a patch I can try to validate?


Regards,
PiBa-NL (Pieter)


Yes, I'm running a somewhat outdated OS here:
  FreeBSD freebsd11 11.1-RELEASE FreeBSD 11.1-RELEASE #0 r321309: Fri 
Jul 21 02:08:28 UTC 2017 
r...@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64


Version used:
  haproxy -vv
HA-Proxy version 2.2-dev0-08fa16e 2020/01/08 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-null-dereference -Wno-unused-label 
-Wno-unused-parameter -Wno-sign-compare -Wno-ignored-qualifiers 
-Wno-unused-command-line-argument -Wno-missing-field-initializers 
-Wno-address-of-packed-member -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_REGPARM=1 USE_STATIC_PCRE=1 
USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1


Feature list : -EPOLL +KQUEUE -MY_EPOLL -MY_SPLICE -NETFILTER +PCRE 
+PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD 
-PTHREAD_PSHARED +REGPARM +STATIC_PCRE -STATIC_PCRE2 +TPROXY 
-LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT -CRYPT_H -VSYSCALL +GETADDRINFO 
+OPENSSL +LUA -FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY -TFO 
-NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
-PRCTL -THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built wi

Re: commit 246c024 - breaks loading crt-list with .ocsp files present

2019-10-15 Thread PiBa-NL

On 15-10-2019 at 13:52, William Lallemand wrote:

I pushed the fix.

Thanks


Fix confirmed. Thank you.




Re: freebsd builds are broken for few days - 30ee1ef, proxy_protocol_random_fail.vtc fails because scheme and host are now present in the syslog output.

2019-10-14 Thread PiBa-NL

Hi Christopher,

It seems you fixed/changed the issue I noticed below a few minutes ago 
in commit 452e578 :), thanks.
One remaining question on my side is whether it is expected that 
some platforms will use the 'normalized' URI and other platforms just the 
regular / ?


Regards, PiBa-NL (Pieter)

On 14-10-2019 at 21:22, PiBa-NL wrote:

Hi Ilya, Willy,

On 13-10-2019 at 19:30, Ilya Shipitsin wrote:

https://cirrus-ci.com/github/haproxy/haproxy

I'll bisect if no one else knows what's going on


@Ilya, thanks for checking my favorite platform, FreeBSD ;).

@Willy, this 30ee1ef 
<http://git.haproxy.org/?p=haproxy.git;a=commit;h=30ee1ef> (MEDIUM: 
h2: use the normalized URI encoding for absolute form requests) commit 
'broke' the expect value of the vtest; I don't know why other 
platforms don't see the same change in syslog output though. Anyhow, 
this is the output I get when running 
/reg-tests/connection/proxy_protocol_random_fail.vtc


 Slog_1  0.033 syslog|<134>Oct 14 19:56:34 haproxy[78982]: 
::1:47040 [14/Oct/2019:19:56:34.391] ssl-offload-http~ 
ssl-offload-http/http 0/0/0/0/0 503 222 - -  1/1/0/0/0 0/0 "POST 
https://[::1]:47037/1 HTTP/2.0"
**   Slog_1  0.033 === expect ~ "ssl-offload-http/http .* \"POST 
/[1-8] HTTP/(2\\.0...
 Slog_1  0.033 EXPECT FAILED ~ "ssl-offload-http/http .* "POST 
/[1-8] HTTP/(2\.0|1\.1)""


If i change the vtc vtest file from:
    expect ~ "ssl-offload-http/http .* \"POST /[1-8] 
HTTP/(2\\.0|1\\.1)\""

To:
    expect ~ "ssl-offload-http/http .* \"POST 
https://[[]::1]:[0-9]{1,5}/[1-8] HTTP/(2\\.0|1\\.1)\""

or:
    expect ~ "ssl-offload-http/http .* \"POST 
https://[[]${h1_ssl_addr}]:${h1_ssl_port}/[1-8] HTTP/(2\\.0|1\\.1)\""


Then the test succeeds for me... but now the question is: should or 
shouldn't the scheme and host be present in the syslog output on all 
platforms? Or should the regex contain an (optional?) check for this 
extra part? (Also note that even with these added variables, my 
second regex attempt is still using brackets around the IPv6 
address; I'm not sure all machines would use IPv6 for their localhost 
connection.)


Regards,
PiBa-NL (Pieter)






commit 246c024 - breaks loading crt-list with .ocsp files present

2019-10-14 Thread PiBa-NL

Hi William,

I'm having an issue with the latest master code 2.1-dev2-4a66013. It 
does compile but doesn't want to load my crt-list with .ocsp files 
present for the certificates mentioned. The commit that broke this is: 
246c024


# haproxy -v
HA-Proxy version 2.1-dev2-4a66013 2019/10/14 - https://haproxy.org/
# haproxy -f ./PB-TEST/ultimo_testcase/xxx/haproxy.cfg -d
[ALERT] 286/223026 (39111) : parsing 
[./PB-TEST/ultimo_testcase/xxx/haproxy.cfg:61] : 'bind 0.0.0.0:443' : 
'crt-list' : error processing line 1 in file 
'/usr/ports-pb_haproxy-devel/PB-TEST/ultimo_testcase/xxx/rtrcld.xxx.crt_list' 
: (null)
[ALERT] 286/223026 (39111) : Error(s) found in configuration file : 
./PB-TEST/ultimo_testcase/xxx/haproxy.cfg

[ALERT] 286/223026 (39111) : Fatal errors found in configuration.

Content of the crt-list file (removing the alpn stuff doesn't help either):
/usr/ports-pb_haproxy-devel/PB-TEST/ultimo_testcase/xxx/rtrcld.xxx.pem [ 
alpn h2,http/1.1]
/usr/ports-pb_haproxy-devel/PB-TEST/ultimo_testcase/xxx/rtrcld.xxx/rtrcld.xxx_5ab0da70ab0cc.pem 
[ alpn h2,http/1.1]


The last line is an empty one, but it already complains about line 1, 
which seems valid, and the .pem file exists. The exact same config loads 
fine on commits before this one: 246c024.


I do have a 'filled' .ocsp file present, but no matter whether it's outdated, 
empty or correct, the error above stays. When the .ocsp is absent it 
complains about line 2 of the crt-list, which has its own .ocsp as well.


Can you take a look? Thanks in advance.

Regards,
PiBa-NL (Pieter)




Re: freebsd builds are broken for few days - 30ee1ef, proxy_protocol_random_fail.vtc fails because scheme and host are now present in the syslog output.

2019-10-14 Thread PiBa-NL

Hi Ilya, Willy,

On 13-10-2019 at 19:30, Ilya Shipitsin wrote:

https://cirrus-ci.com/github/haproxy/haproxy

I'll bisect if no one else knows what's going on


@Ilya, thanks for checking my favorite platform, FreeBSD ;).

@Willy, this 30ee1ef 
<http://git.haproxy.org/?p=haproxy.git;a=commit;h=30ee1ef> (MEDIUM: h2: 
use the normalized URI encoding for absolute form requests) commit 
'broke' the expect value of the vtest; I don't know why other platforms 
don't see the same change in syslog output though. Anyhow, this is the 
output I get when running 
/reg-tests/connection/proxy_protocol_random_fail.vtc


 Slog_1  0.033 syslog|<134>Oct 14 19:56:34 haproxy[78982]: ::1:47040 
[14/Oct/2019:19:56:34.391] ssl-offload-http~ ssl-offload-http/http 
0/0/0/0/0 503 222 - -  1/1/0/0/0 0/0 "POST https://[::1]:47037/1 
HTTP/2.0"
**   Slog_1  0.033 === expect ~ "ssl-offload-http/http .* \"POST /[1-8] 
HTTP/(2\\.0...
 Slog_1  0.033 EXPECT FAILED ~ "ssl-offload-http/http .* "POST 
/[1-8] HTTP/(2\.0|1\.1)""


If i change the vtc vtest file from:
    expect ~ "ssl-offload-http/http .* \"POST /[1-8] HTTP/(2\\.0|1\\.1)\""
To:
    expect ~ "ssl-offload-http/http .* \"POST 
https://[[]::1]:[0-9]{1,5}/[1-8] HTTP/(2\\.0|1\\.1)\""

or:
    expect ~ "ssl-offload-http/http .* \"POST 
https://[[]${h1_ssl_addr}]:${h1_ssl_port}/[1-8] HTTP/(2\\.0|1\\.1)\""


Then the test succeeds for me... but now the question is: should or 
shouldn't the scheme and host be present in the syslog output on all 
platforms? Or should the regex contain an (optional?) check for this 
extra part? (Also note that even with these added variables, my second 
regex attempt is still using brackets around the IPv6 address; I'm not 
sure all machines would use IPv6 for their localhost connection.)


Regards,
PiBa-NL (Pieter)



Re: haproxy -v doesn't show commit used when building from 2.0 repository?

2019-08-01 Thread PiBa-NL

Hi Willy,

On 1-8-2019 at 6:21, Willy Tarreau wrote:

Hi Pieter,

On Wed, Jul 31, 2019 at 10:56:54PM +0200, PiBa-NL wrote:

Hi List,

I have built haproxy 2.0.3-0ff395c from source; however, after running
'haproxy -v' it shows up as: 'HA-Proxy version 2.0.3 2019/07/23 -
https://haproxy.org/'. This isn't really correct imho, as it's a version based
on code committed on 7/30. And I kinda expected the commit ID to be
part of the version shown?

I know what's happening, I always forget to do it with each new major
release. We're using Git attributes to automatically patch files
"SUBVERS" and "VERDATE" when creating the archive :

$ cat info/attributes
SUBVERS export-subst
VERDATE export-subst

And this is something I forget to re-create with each new repository,
I've fixed it now. It will be OK with new snapshots starting tomorrow.

Thanks!
Willy


Works for me: building the latest commit in the 2.0 repository, haproxy -v 
now shows "HA-Proxy version 2.0.3-7343c71 2019/08/01 - 
https://haproxy.org/". Thanks for your quick fix & reply :).
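
The mechanism Willy refers to is Git's export-subst attribute: files marked
with it get their $Format:...$ placeholders expanded by `git archive` when a
snapshot tarball is created. A sketch of the setup (the placeholder contents
below are illustrative, not copied from the haproxy repository):

```text
# info/attributes (or .gitattributes) in the release repository
SUBVERS export-subst
VERDATE export-subst

# SUBVERS would then carry a placeholder such as:
#   -$Format:%h$
# which `git archive` rewrites to e.g. "-7343c71", producing the
# "2.0.3-7343c71" style version string when building from a snapshot.
# Without the attribute, the placeholder is left unexpanded and the
# commit ID never makes it into `haproxy -v`.
```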


Regards,
PiBa-NL (Pieter)



haproxy -v doesn't show commit used when building from 2.0 repository?

2019-07-31 Thread PiBa-NL

Hi List,

I have built haproxy 2.0.3-0ff395c from source; however, after running 
'haproxy -v' it shows up as: 'HA-Proxy version 2.0.3 2019/07/23 - 
https://haproxy.org/'. This isn't really correct imho, as it's a version 
based on code committed on 7/30. And I kinda expected the commit ID 
to be part of the version shown?


Did I do something wrong? I thought the commit should automatically 
become part of the version, though it's very well possible I've broken the 
local FreeBSD makefile I'm using. When building from the master repository 
it seems to work fine though. If it's caused by the contents of the 
repository, can it be changed? I find it really useful to see which 
commit a certain compiled haproxy binary was based upon. Thanks in 
advance :).


Regards,
PiBa-NL (Pieter)






Re: slow healthchecks after dev6+ with added commit "6ec902a MINOR: threads: serialize threads initialization"

2019-06-11 Thread PiBa-NL

Hi Willy,

On 11-6-2019 at 11:37, Willy Tarreau wrote:

On Tue, Jun 11, 2019 at 09:06:46AM +0200, Willy Tarreau wrote:

I'd like you to give it a try in your environment to confirm whether or
not it does improve things. If so, I'll clean it up and merge it. I'm
also interested in any reproducer you could have, given that the made up
test case I did above doesn't even show anything alarming.

No need to waste your time anymore, I now found how to reproduce it with
this config :

 global
stats socket /tmp/sock1 mode 666 level admin
nbthread 64

 backend stopme
timeout server  1s
option tcp-check
tcp-check send "debug dev exit\n"
server cli unix@/tmp/sock1 check

Then I run it in loops bound to different CPU counts:

$ time for i in {1..20}; do
 taskset -c 0,1,2,3 ./haproxy -db -f slow-init.cfg >/dev/null 2>&1
  done

With a single CPU, it can take up to 10 seconds to run the loop on
commits e186161 and e4d7c9d while it takes 0.18 second with the patch.

With 4 CPUs like above, it takes 1.5s with e186161, 2.3s with e4d7c9d
and 0.16 second with the patch.

The tests I had run consisted in starting hundreds of thousands of
listeners to amplify the impact of the start time, but in the end
it was diluting the extra time in an already very long time. Running
it in loops like above is quite close to what regtests do and explains
why I couldn't spot the difference (e.g. a few hundreds of ms at worst
among tens of seconds).

Thus I'm merging the patch now (cleaned up already and tested as well
without threads).

Let's hope it's the last time :-)

Thanks,
Willy


Seems I kept you busy for another day, but the result is there; it 
looks 100% fixed to me :).


Running without nbthread, and as such using all 16 threads of the VM I'm 
using, I now get this:

2.0-dev7-ca3551f 2019/06/11
**   h1    0.732 WAIT4 pid=80796 status=0x0002 (user 0.055515 sys 0.039653)
**   h2    0.846 WAIT4 pid=80799 status=0x0002 (user 0.039039 sys 0.039039)
#    top  TEST ./test/tls_health_checks-org.vtc passed (0.848)

Also, repeating the test case 1000 times while running 10 of them in 
parallel, only 1 of them failed with a timeout:
     S1    0.280 syslog|<134>Jun 11 22:24:48 haproxy[88306]: 
::1:63856 [11/Jun/2019:22:24:48.074] fe1/1: Timeout during SSL handshake
Given the really short timeouts in the test case itself, I think this is 
an excellent result.


I'm considering this one fully fixed, thanks again.

Regards,
PiBa-NL (Pieter)




Re: slow healthchecks after dev6+ with added commit "6ec902a MINOR: threads: serialize threads initialization"

2019-06-10 Thread PiBa-NL

Hi Willy,

On 10-6-2019 at 16:14, Willy Tarreau wrote:

Hi Pieter,

On Mon, Jun 10, 2019 at 04:06:13PM +0200, PiBa-NL wrote:

Things certainly look better again now regarding this issue.

Ah cool!


Running the test repeatedly, and manually looking over the results, it's
pretty much as good as it was before. There seems to be a 1 ms increase in
the check duration, but maybe this is because of the moved initialization,
which on startup delays the first test a millisecond or something?

It should not. At this point I think it can be anything, including measurement
noise or even thread assignment on startup!


Below are some test results based on manual observation and some in-my-head
filtering of the console output (mistakes included ;) ).
repeat 10 ./vt -v ./work/haproxy-*/reg-tests/checks/tls_health_checks.vtc |
grep Layer7 | grep OK | grep WARNING
Commit-ID , min-max time for +-95% check durations , comment
e4d7c9d , 6 - 9 ms ,     all tests pass  (1 test out of +- a hundred showed
29ms , none below 6ms and almost half of them show 7ms)

Great!


6ec902a , 11 - 150 ms ,  across the 12 tests that passed

That's quite a difference indeed.


e186161 , 5 - 8 ms ,     all tests pass  (1 test used 15 ms; more than half
the tests show 5ms check duration; the majority of the remainder show 6ms)

OK!


I'm not sure whether this deserves further investigation at the moment; I think
it does not. Thanks for spending your weekend on this :), that wasn't my
intention.

Oh don't worry, you know I'm a low-level guy, just give me a problem to solve
with a few bits available only and I can spend countless hours on it! Others
entertain themselves playing games, for me this is a game :-)

Thanks a lot for testing, at least we know there isn't another strange
thing hidden behind.

Cheers,
Willy


After a bit more fiddling I noticed that the new startup method seems 
more CPU intensive.
The vtest also takes a bit longer to pass (1.3 s vs. 0.8 s), even though 
the health-check durations themselves are short as expected, and it uses 
quite a bit more 'user' CPU. I was wondering whether this is a consequence 
of the new init sequence, or whether some improvement is still needed 
there. I noticed this after trying to run multiple tests simultaneously 
again; they interfered with each other more than they used to.


2.0-dev6-e4d7c9d 2019/06/10
**   h1    1.279 WAIT4 pid=63820 status=0x0002 (user 4.484293 sys 0.054781)
**   h2    1.394 WAIT4 pid=63823 status=0x0002 (user 4.637692 sys 0.015588)
#    top  TEST ./test/tls_health_checks-org.vtc passed (1.395)

Before the latest changes it used less 'user':
2.0-dev6-e186161 2019/06/07
**   h1    0.783 WAIT4 pid=65811 status=0x0002 (user 1.077052 sys 0.031218)
**   h2    0.897 WAIT4 pid=65814 status=0x0002 (user 0.341360 sys 0.037928)
#    top  TEST ./test/tls_health_checks-org.vtc passed (0.899)


And with 'nbthread 1' the user CPU usage is dramatically lower still 
with the same test:

2.0-dev6-e4d7c9d 2019/06/10
**   h1    0.684 WAIT4 pid=67990 status=0x0002 (user 0.015203 sys 0.015203)
**   h2    0.791 WAIT4 pid=67993 status=0x0002 (user 0.013551 sys 0.009034)
#    top  TEST ./test/tls_health_checks-org.vtc passed (0.793)

2.0-dev6-e186161 2019/06/07
**   h1    0.682 WAIT4 pid=65854 status=0x0002 (user 0.007158 sys 0.021474)
**   h2    0.790 WAIT4 pid=65857 status=0x0002 (user 0.007180 sys 0.014361)

If a single-threaded haproxy process can run with 0.015 user CPU usage, 
I would not have expected it to require 4.4 on a 16-core CPU for the 
same startup and actions. It should be easier, not more expensive, to spawn 
a second thread with the already-parsed config. Even if it parsed the 
config once in each thread separately, that wouldn't make sense 
to me.



So I also tried with 'nbthread 8', and that still seems to be 
'alright' as seen below, so I guess that with the default of nbthread 16 the 
h1 and h2 processes get into some conflict fighting over the available cores. 
And since haproxy by default will use all cores since 2.0-dev3, I guess it 
might cause some undesirable effects in the field once it gets released 
and isn't the only process running on a machine. Even if it is the 
only intensive process, I wonder what other VMs might think about it on 
the same hypervisor, though I know VMs always give 'virtual 
performance' ;) ..


Running with nbthread 8, still relatively low user usage & test time:
2.0-dev6-e4d7c9d 2019/06/10
**   h1    0.713 WAIT4 pid=68467 status=0x0002 (user 0.197443 sys 0.022781)
**   h2    0.824 WAIT4 pid=68470 status=0x0002 (user 0.184567 sys 0.026366)
#    top  TEST ./test/tls_health_checks-org.vtc passed (0.825)

Hope you can make sense of some of this. Sorry for not noticing earlier; 
I guess I was too focused on just the health-check duration. Or maybe it's 
just me interpreting the numbers wrongly; that's surely also an option.


Regards,
PiBa-NL (Pieter)




Re: slow healthchecks after dev6+ with added commit "6ec902a MINOR: threads: serialize threads initialization"

2019-06-10 Thread PiBa-NL

Hi Willy,

On 10-6-2019 at 11:09, Willy Tarreau wrote:

Hi Pieter,

On Sat, Jun 08, 2019 at 06:07:09AM +0200, Willy Tarreau wrote:

Hi Pieter,

On Fri, Jun 07, 2019 at 11:32:18PM +0200, PiBa-NL wrote:

Hi Willy,

After the commit "6ec902a MINOR: threads: serialize threads initialization",
however, I have failing/slow health checks in the tls_health_checks.vtc
test. Before that commit the Layer7 OK takes 5ms; after it, the health check
takes up to 75ms or even more. It causes the 20ms connect/server timeouts
to also fail the test fairly often, but not always.

This is very strange, as the modification only involves threads startup.
Hmmm, actually I'm starting to think about a possibility I need to verify.
I suspect it may happen that a thread manages to finish its initialization
before others request synchronization, thus believes it's alone and starts.
I'm going to have a deeper look at this problem with this in mind. I didn't
notice the failed check here but I'll hammer it a bit more.

Sorry for the long silence, it was harder than I thought. So I never managed
to reproduce this typical issue, even by adding random delays here and there,
but I managed to see that some threads were starting the event loop before
others were done initializing, which will obviously result in issues such as
missed events that could result in what you observed.

I initially thought I could easily add a synchronization step using the
current two bit fields (and spent my whole week-end writing parallel
algorithms and revisiting all our locking mechanism just because of this).
After numerous failed attempts, I later figured that I needed to represent
more than 4 states per thread and that 2 bits are not enough. Bah... at
least I had fun time... Thus I added a new field and a simple function to
allow the code to start in synchronous steps. We now initialize one thread
at a time, then once they are all initialized we enable the listeners, and
once they are enabled, we start the pollers in all threads. It is pretty
obvious from the traces that it now does the right thing. However since I
couldn't reproduce the health check issue you were facing, I'm interested
in knowing if it's still present with the latest master, as it could also
uncover another issue.

Thanks!
Willy


Things certainly look better again now regarding this issue.

Running the test repeatedly, and manually looking over the results, it's 
pretty much as good as it was before. There seems to be a 1 ms increase 
in the check duration, but maybe this is because of the moved 
initialization, which on startup delays the first test a millisecond or 
something?


Below are some test results based on manual observation and some 
in-my-head filtering of the console output (mistakes included ;) ).
repeat 10 ./vt -v 
./work/haproxy-*/reg-tests/checks/tls_health_checks.vtc | grep Layer7 | 
grep OK | grep WARNING

Commit-ID , min-max time for +-95% check durations , comment
e4d7c9d , 6 - 9 ms ,     all tests pass  (1 test out of +- a hundred 
showed 29ms , none below 6ms and almost half of them show 7ms)

6ec902a , 11 - 150 ms ,  across the 12 tests that passed
e186161 , 5 - 8 ms ,     all tests pass  (1 test used 15 ms; more than 
half the tests show 5ms check duration; the majority of the remainder 
show 6ms)


I'm not sure whether this deserves further investigation at the moment; I 
think it does not. Thanks for spending your weekend on this :), that 
wasn't my intention.


Regards,
PiBa-NL (Pieter)




slow healthchecks after dev6+ with added commit "6ec902a MINOR: threads: serialize threads initialization"

2019-06-07 Thread PiBa-NL

Hi Willy,

After the commit "6ec902a MINOR: threads: serialize threads 
initialization", however, I have failing/slow health checks in the 
tls_health_checks.vtc test. Before that commit the Layer7 OK takes 5ms; 
after this commit the health check takes up to 75ms or even more. It causes 
the 20ms connect/server timeouts to also fail the test fairly often, 
but not always.


Seems like something isn't quite right there. Can you check?

Regards,

PiBa-NL (Pieter)


The log can be seen below (p.s. I also added milliseconds output to the 
vtest log):
***  h2    0.299 debug|[WARNING] 157/231456 (78988) : Health check for 
server be2/srv1 succeeded, reason: Layer7 check passed, code: 200, info: 
"OK", check duration: 149ms, status: 1/1 UP.



## With HTX
*    top   0.000 TEST 
./work/haproxy-6ec902a/reg-tests/checks/tls_health_checks.vtc starting

 top   0.000 extmacro def pwd=/usr/ports/net/haproxy-devel
 top   0.000 extmacro def no-htx=
 top   0.000 extmacro def localhost=127.0.0.1
 top   0.000 extmacro def bad_backend=127.0.0.1 19775
 top   0.000 extmacro def bad_ip=192.0.2.255
 top   0.000 macro def 
testdir=/usr/ports/net/haproxy-devel/./work/haproxy-6ec902a/reg-tests/checks

 top   0.000 macro def tmpdir=/tmp/vtc.78981.41db79fd
**   top   0.000 === varnishtest "Health-check test over TLS/SSL"
*    top   0.000 VTEST Health-check test over TLS/SSL
**   top   0.000 === feature ignore_unknown_macro
**   top   0.000 === server s1 {
**   s1    0.000 Starting server
 s1    0.000 macro def s1_addr=127.0.0.1
 s1    0.000 macro def s1_port=19776
 s1    0.000 macro def s1_sock=127.0.0.1 19776
*    s1    0.000 Listen on 127.0.0.1 19776
**   top   0.001 === server s2 {
**   s2    0.001 Starting server
 s2    0.001 macro def s2_addr=127.0.0.1
 s2    0.001 macro def s2_port=19777
 s2    0.001 macro def s2_sock=127.0.0.1 19777
*    s2    0.001 Listen on 127.0.0.1 19777
**   top   0.002 === syslog S1 -level notice {
**   S1    0.002 Starting syslog server
 S1    0.002 macro def S1_addr=127.0.0.1
 S1    0.002 macro def S1_port=14641
 S1    0.002 macro def S1_sock=127.0.0.1 14641
*    S1    0.002 Bound on 127.0.0.1 14641
**   s2    0.002 Started on 127.0.0.1 19777 (1 iterations)
**   s1    0.002 Started on 127.0.0.1 19776 (1 iterations)
**   top   0.002 === haproxy h1 -conf {
**   S1    0.002 Started on 127.0.0.1 14641 (level: 5)
**   S1    0.002 === recv
 h1    0.007 macro def h1_cli_sock=::1 19778
 h1    0.007 macro def h1_cli_addr=::1
 h1    0.007 macro def h1_cli_port=19778
 h1    0.007 setenv(cli, 8)
 h1    0.007 macro def h1_fe1_sock=::1 19779
 h1    0.007 macro def h1_fe1_addr=::1
 h1    0.007 macro def h1_fe1_port=19779
 h1    0.007 setenv(fe1, 9)
 h1    0.007 macro def h1_fe2_sock=::1 19780
 h1    0.007 macro def h1_fe2_addr=::1
 h1    0.007 macro def h1_fe2_port=19780
 h1    0.007 setenv(fe2, 10)
**   h1    0.007 haproxy_start
 h1    0.007 opt_worker 0 opt_daemon 0 opt_check_mode 0
 h1    0.007 argv|exec "haproxy" -d  -f "/tmp/vtc.78981.41db79fd/h1/cfg"
 h1    0.007 conf|    global
 h1    0.007 conf|\tstats socket 
"/tmp/vtc.78981.41db79fd/h1/stats.sock" level admin mode 600

 h1    0.007 conf|    stats socket "fd@${cli}" level admin
 h1    0.007 conf|
 h1    0.007 conf|    global
 h1    0.007 conf|    tune.ssl.default-dh-param 2048
 h1    0.007 conf|
 h1    0.007 conf|    defaults
 h1    0.007 conf|    mode http
 h1    0.007 conf|    timeout client 20
 h1    0.007 conf|    timeout server 20
 h1    0.007 conf|    timeout connect 20
 h1    0.007 conf|
 h1    0.007 conf|    backend be1
 h1    0.007 conf|    server srv1 127.0.0.1:19776
 h1    0.007 conf|
 h1    0.007 conf|    backend be2
 h1    0.007 conf|    server srv2 127.0.0.1:19777
 h1    0.007 conf|
 h1    0.007 conf|    frontend fe1
 h1    0.007 conf|    option httplog
 h1    0.007 conf|    log 127.0.0.1:14641 len 2048 local0 debug err
 h1    0.007 conf|    bind "fd@${fe1}" ssl crt 
/usr/ports/net/haproxy-devel/./work/haproxy-6ec902a/reg-tests/checks/common.pem

 h1    0.007 conf|    use_backend be1
 h1    0.007 conf|
 h1    0.007 conf|    frontend fe2
 h1    0.007 conf|    option tcplog
 h1    0.007 conf|    bind "fd@${fe2}" ssl crt 
/usr/ports/net/haproxy-devel/./work/haproxy-6ec902a/reg-tests/checks/common.pem

 h1    0.007 conf|    use_backend be2
 h1    0.007 XXX 12 @637
***  h1    0.008 PID: 78985
 h1    0.008 macro def h1_pid=78985
 h1    0.008 macro def h1_name=/tmp/vtc.78981.41db79fd/h1
**   top   0.008 === syslog S2 -level notice {
**   S2    0.008 Starting syslog server
 S2    0.008 macro def S2_addr=127.0.0.1
 S2    0.008 macro def S2_port=35409
 S

Re: haproxy 2.0-dev5-a689c3d - A bogus STREAM [0x805547500] is spinning at 100000 calls per second and refuses to die, aborting now!

2019-06-07 Thread PiBa-NL

Hi Willy,

On 7-6-2019 at 9:03, Willy Tarreau wrote:

Hi again Pieter,

On Tue, Jun 04, 2019 at 04:59:06PM +0200, Willy Tarreau wrote:

Whatever the values, no single stream should be woken up 100k times per
second or it definitely indicates a bug (spinning loop that leads to reports
of 100% CPU)!

I'll see if I can get something out of this.

So just for the record, this is expected to be fixed in dev6 (it's the
major change there). I'm interested in your feedback on this one, of
course!

Willy


The stream does not spin anymore with dev6 so that seems to work 
alright. Thanks.


Regards,
PiBa-NL (Pieter)




Re: 2.0-dev5-ea8dd94 - conn_fd_handler() - dumps core - Program terminated with signal 11, Segmentation fault.

2019-06-06 Thread PiBa-NL

Hi Olivier,

Op 6-6-2019 om 18:20 schreef Olivier Houchard:

Hi Pieter,

On Wed, Jun 05, 2019 at 09:00:22PM +0200, PiBa-NL wrote:

Hi Olivier,

It seems this commit ea8dd94 broke something for my FreeBSD 11 system.
Before that commit (almost) all vtests succeed; after it several cause
core dumps (and keep doing that, including the current HEAD: 03abf2d).

Can you take a look at the issue?

Below in this mail are the following:
- gdb# bt full  of one of the crashed tests..
- summary of failed tests

Regards,
PiBa-NL (Pieter)


Indeed, there were a few issues. I now pushed enough patches so that I only
fail one reg test, which also failed before the offending commit
(reg-tests/compression/basic.vtc).
Can you confirm it's doing better for you too ?

Thanks !

Olivier


Looks better for me :).

Testing with haproxy version: 2.0-dev5-7b3a79f
0 tests failed, 0 tests skipped, 36 tests passed

This includes the /compression/basic.vtc for me.

p.s.
This result doesn't "always" happen. But at least it seems 'just as 
good' as before ea8dd94. For example I still see this in my tests:

1 tests failed, 0 tests skipped, 35 tests passed
## Gathering results ##
## Test case: 
./work/haproxy-7b3a79f/reg-tests/http-rules/converters_ipmask_concat_strcmp_field_word.vtc 
##
## test results in: 
"/tmp/haregtests-2019-06-06_20-22-18.5Z9PR6/vtc.4579.056b0e93"

 c1    0.3 EXPECT resp.status (504) == "200" failed
But then another test run of the same binary again says '36 passed'.. so 
it seems some tests are rather timing-sensitive, or maybe another 
variable doesn't play nice.. Anyhow the core dump as reported is fixed. 
I'll try and find out why the test results are a bit inconsistent when 
running them repeatedly, and I'll send a new mail for that if I find 
something conclusive :).


Thanks,
PiBa-NL (Pieter)




2.0-dev5-ea8dd94 - conn_fd_handler() - dumps core - Program terminated with signal 11, Segmentation fault.

2019-06-05 Thread PiBa-NL

Hi Olivier,

It seems this commit ea8dd94 broke something for my FreeBSD 11 system. 
Before that commit (almost) all vtests succeed; after it several cause 
core dumps (and keep doing that, including the current HEAD: 03abf2d).


Can you take a look at the issue?

Below in this mail are the following:
- gdb# bt full  of one of the crashed tests..
- summary of failed tests

Regards,
PiBa-NL (Pieter)

gdb --core 
/tmp/haregtests-2019-06-05_20-40-20.7ZSvbo/vtc.65353.510907b0/h1/haproxy.core 
./work/haproxy-ea8dd94/haproxy

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain 
conditions.

Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...
Core was generated by 
`/usr/ports/net/haproxy-devel/work/haproxy-ea8dd94/haproxy -d -f 
/tmp/haregtests-'.

Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libcrypt.so.5...done.
Loaded symbols for /lib/libcrypt.so.5
Reading symbols from /lib/libz.so.6...done.
Loaded symbols for /lib/libz.so.6
Reading symbols from /lib/libthr.so.3...done.
Loaded symbols for /lib/libthr.so.3
Reading symbols from /usr/lib/libssl.so.8...done.
Loaded symbols for /usr/lib/libssl.so.8
Reading symbols from /lib/libcrypto.so.8...done.
Loaded symbols for /lib/libcrypto.so.8
Reading symbols from /usr/local/lib/liblua-5.3.so...done.
Loaded symbols for /usr/local/lib/liblua-5.3.so
Reading symbols from /lib/libm.so.5...done.
Loaded symbols for /lib/libm.so.5
Reading symbols from /lib/libc.so.7...done.
Loaded symbols for /lib/libc.so.7
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  conn_fd_handler (fd=55) at src/connection.c:201
201 conn->mux->wake && conn->mux->wake(conn) < 0)
(gdb) bt full
#0  conn_fd_handler (fd=55) at src/connection.c:201
    conn = (struct connection *) 0x8027e7000
    flags = 0
    io_available = 1
#1  0x005fe3f2 in fdlist_process_cached_events (fdlist=0xa66ac0) 
at src/fd.c:452

    fd = 55
    old_fd = 55
    e = 39
    locked = 0
#2  0x005fdefc in fd_process_cached_events () at src/fd.c:470
No locals.
#3  0x0051c15d in run_poll_loop () at src/haproxy.c:2553
    next = 0
    wake = 0
#4  0x00519ba7 in run_thread_poll_loop (data=0x0) at 
src/haproxy.c:2607

    ptaf = (struct per_thread_alloc_fct *) 0x94f158
    ptif = (struct per_thread_init_fct *) 0x94f168
    ptdf = (struct per_thread_deinit_fct *) 0x7fffe640
    ptff = (struct per_thread_free_fct *) 0x610596
#5  0x0051620d in main (argc=4, argv=0x7fffe648) at 
src/haproxy.c:3286

    blocked_sig = {__bits = 0x7fffe398}
    old_sig = {__bits = 0x7fffe388}
    i = 16
    err = 0
    retry = 200
    limit = {rlim_cur = 234908, rlim_max = 234909}
    errmsg = 0x7fffe550 ""
    pidfd = -1
Current language:  auto; currently minimal


## Starting vtest ##
Testing with haproxy version: 2.0-dev5-ea8dd94
#    top  TEST reg-tests/lua/txn_get_priv.vtc FAILED (0.308) exit=2
#    top  TEST reg-tests/ssl/wrong_ctx_storage.vtc FAILED (0.308) exit=2
#    top  TEST reg-tests/compression/lua_validation.vtc FAILED (0.433) 
exit=2

#    top  TEST reg-tests/checks/tls_health_checks.vtc TIMED OUT (kill -9)
#    top  TEST reg-tests/checks/tls_health_checks.vtc FAILED (20.153) 
signal=9
#    top  TEST reg-tests/peers/tls_basic_sync_wo_stkt_backend.vtc TIMED 
OUT (kill -9)
#    top  TEST reg-tests/peers/tls_basic_sync_wo_stkt_backend.vtc FAILED 
(20.137) signal=9

#    top  TEST reg-tests/peers/tls_basic_sync.vtc TIMED OUT (kill -9)
#    top  TEST reg-tests/peers/tls_basic_sync.vtc FAILED (20.095) signal=9
#    top  TEST reg-tests/connection/proxy_protocol_random_fail.vtc TIMED 
OUT (kill -9)
#    top  TEST reg-tests/connection/proxy_protocol_random_fail.vtc 
FAILED (20.195) signal=9

7 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: reg-tests/compression/lua_validation.vtc ##
## test results in: 
"/tmp/haregtests-2019-06-05_20-40-20.7ZSvbo/vtc.65353.510907b0"

 top   0.4 shell_exit not as expected: got 0x0001 wanted 0x
 h1    0.4 Bad exit status: 0x008b exit 0x0 signal 11 core 128
## Test case: reg-tests/peers/tls_basic_sync.vtc ##
## test results in: 
"/tmp/haregtests-2019-06-05_20-40-20.7ZSvbo/vtc.65353.5e9ca234"


## Test case: reg-tests/connection/proxy_protocol_random_fail.vtc ##
## test results in: 
"/tmp/haregtests-2019-06-05_20-40-20.7ZSvbo/vtc.65353.19403d36"

Re: How to allow Client Requests at a given rate

2019-04-24 Thread PiBa-NL
choServer.cpp:117]
> current rate : 2488

It looks to me like it's almost exactly the configured 10 requests that 
got allowed in that minute, summing up the rate numbers listed above.


>>> until almost 60s no http requests are received by the backends >>
this time gap varies with every run ...
>>> after 60 secs rate limits are applied properly >>>>
E0422 11:00:07.690192 18653 EchoServer.cpp:117]
> current rate : 1
E0422 11:00:10.411736 18653 EchoServer.cpp:117]
> current rate : 1
E0422 11:00:11.412317 18653 EchoServer.cpp:117]
> current rate : 1679
E0422 11:00:12.412369 18653 EchoServer.cpp:117]
> current rate : 1667
E0422 11:00:13.451706 18653 EchoServer.cpp:117]
> current rate : 1668
E0422 11:00:14.453778 18653 EchoServer.cpp:117]
> current rate : 1668
E0422 11:00:15.457597 18653 EchoServer.cpp:117]
> current rate : 1645
E0422 11:00:16.458938 18653 EchoServer.cpp:117]
> current rate : 1762
E0422 11:00:17.470010 18653 EchoServer.cpp:117]
> current rate : 1598


Can I get some info on the issue, is this a known issue or am I
missing some config for rate limiting to be applied properly ?

Thanks in advance,
  Badari

I wonder if, instead of allowing 10 requests per minute, you would 
like 1666 requests to be allowed per second? That should effectively 
be similar, besides that 'bursts' of requests will be blocked sooner.. To 
do this, use 1s instead of 1m for the 'http_req_rate(1m)', and put the 
1666 as a limit in the map file...


Still you might see a burst of 1000 requests in the first millisecond, 
and only 666 allowed in the other 999 milliseconds (theoretically?). 
But it's probably not really relevant in which ms a request is 
allowed or blocked. You could argue that allowing 2 requests per 
millisecond would achieve almost the desired benchmark result. But then, 
if there is nothing to do and ten users send a request in the same 
millisecond, you might block eight of them... while the server actually 
has little to do... And though managing this at the millisecond level is 
likely ridiculous, it's just to make it a bit clearer that a short 
'burst' of requests isn't necessarily bad and that requests aren't 
always expected to come in at exactly the same speed.. So depending on 
the expected runtime of a request, and on when the server starts to have 
trouble, the current 10/minute might be perfectly fine.. or make it a 1 
per 10 seconds.?.


So to sum things up: the limiting is working, and it's allowing 10 
requests in the first minute, just as specified. So in that regard it's 
working correctly already..
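To make the per-second suggestion above concrete, a minimal sketch could look roughly like the config below. This is hypothetical (the original configuration from this thread is not shown): the frontend name, port, and table size are made up, and the map file mentioned above is replaced by a fixed 1666/s threshold for brevity:

```
# Hypothetical sketch: track per-client request rate over a 1s window
# instead of 1m, and deny requests above 1666 per second.
frontend fe_main
    bind :8080
    stick-table type ip size 100k expire 10s store http_req_rate(1s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 1666 }
    default_backend be_app
```

With a window of 1s the counter resets much faster, so a burst is cut off within the same second rather than consuming the whole minute's budget at once.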


Regards,

PiBa-NL (Pieter)




Re: DNS Resolver Issues

2019-03-23 Thread PiBa-NL

Hi Daniel, Baptiste,

@Daniel, can you remove the 'addr loadbalancer-internal.xxx.yyy' from 
the server check? It seems to me that that name is not being resolved by 
the 'resolvers'. And even if it were, it would be kinda redundant, as in 
the example it is the same as the server name. Not sure how far the 
scenarios below are all explained by this though..


@Baptiste, is it intentional that a wrong 'addr' DNS name makes haproxy 
fail to start despite having the supposedly never-failing 
'default-server init-addr last,libc,none' ? Is it possibly a good 
feature request to support re-resolving a DNS name for the addr setting 
as well ?
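For readers following along, the 'never failing' startup pattern being discussed looks roughly like the sketch below. This is a hedged, minimal example (the resolvers section, server names, and hostname are made up, not taken from Daniel's config); with init-addr ending in none, a server whose name cannot be resolved at startup is put in maintenance instead of aborting the whole start:

```
resolvers mydns
    nameserver dns1 127.0.0.1:53
    hold valid 10s

backend be_app
    # last: reuse address from state file; libc: resolve at startup;
    # none: start with no address (MAINT) rather than failing to boot
    default-server init-addr last,libc,none resolvers mydns check
    server-template srv 3 app.example.com:80
```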


Regards,
PiBa-NL (Pieter)

Op 21-3-2019 om 20:37 schreef Daniel Schneller:

Hi!

Thanks for the response. I had looked at the "hold" directives, but since they 
all seem to have reasonable defaults, I did not touch them.
I specified 10s explicitly, but it did not make a difference.

I did some more tests, however, and it seems to have more to do with the number 
of responses for the initial(?) DNS queries.
Hopefully these three tables make sense and don't get mangled in the mail. The 
"templated"
proxy is defined via "server-template" with 3 "slots". The "regular" one just as 
"server".


Test 1: Start out  with both "valid" and "broken" DNS entries. Then comment 
out/add back
one at a time as described in (1)-(5).
Each time after changing /etc/hosts, restart dnsmasq and check haproxy via 
hatop.
Haproxy started fresh once dnsmasq was set up to (1).

|  state   state
 /etc/hosts |  regular templated
|-
(1) BRK|  UP/L7OK DOWN/L4TOUT
 VALID  |  MAINT/resolution
|  UP/L7OK
|

(2) BRK|  DOWN/L4TOUT DOWN/L4TOUT
 #VALID |  MAINT/resolution
|  MAINT/resolution
|
(3) #BRK   |  UP/L7OK UP/L7OK
 VALID  |  MAINT/resolution
|  MAINT/resolution
|
(4) BRK|  UP/L7OK UP/L7OK
 VALID  |  DOWN/L4TOUT
|  MAINT/resolution
|
(5) BRK|  DOWN/L4TOUT DOWN/L4TOUT
 #VALID |  MAINT/resolution
|  MAINT/resolution
   
This all looks normal and as expected. As soon as the "VALID" DNS entry is present, the
UP state follows within a few seconds.
   



Test 2: Start out "valid only" (1) and proceed as described in (2)-(5), again 
restarting
dnsmasq each time, and haproxy reloaded after dnsmasq was set up to (1).

|  state   state
 /etc/hosts |  regular templated
|
(1) #BRK   |  UP/L7OK MAINT/resolution
 VALID  |  MAINT/resolution
|  UP/L7OK
|
(2) BRK|  UP/L7OK DOWN/L4TOUT
 VALID  |  MAINT/resolution
|  UP/L7OK
|
(3) #BRK   |  UP/L7OK MAINT/resolution
 VALID  |  MAINT/resolution
|  UP/L7OK
|
(4) BRK|  UP/L7OK DOWN/L4TOUT
 VALID  |  MAINT/resolution
|  UP/L7OK
|
(5) BRK|  DOWN/L4TOUT DOWN/L4TOUT
 #VALID |  MAINT/resolution
|  MAINT/resolution
   
Everything good here, too. Adding the broken DNS entry does not bring the proxies down
until only the broken one is left.



Test 3: Start out "broken only" (1).
Again, same as before, haproxy restarted once dnsmasq was initialized to (1).

|  state   state
 /etc/hosts |  regular templated
|
(1) BRK   

Re: haproxy reverse proxy to https streaming backend

2019-03-15 Thread PiBa-NL

Hi Thomas,

Op 15-3-2019 om 15:24 schreef Thomas Schmiedl:

Hello Pieter,

thanks for your help, it works well now. The regex solution was my only
idea, because I'm not a developer. I know the haproxy workaround isn't
the best solution, but nobody would fix the xupnpd2 hls-handling.

Maybe you could help me again. I see, that the playlist has 2 "states"
("header" tags).

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:50
#EXT-X-DISCONTINUITY

and

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:
#EXT-X-TARGETDURATION:2

The result from the lua-script (header tags) should always be:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:
#EXT-X-TARGETDURATION:2


Something like this might do the trick? Just a 'fixed' header as the 
result, with only the 'found' media sequence number inserted?:


local data = [=[
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:01234
#EXT-X-TARGETDURATION:50
#EXT-X-DISCONTINUITY
]=]

local mediaseq_dummy,mediasequence,mediaseq_eol = string.match(data, 
"(#EXT[-]X[-]MEDIA[-]SEQUENCE:)(%d+)(\n)")


local result = [=[
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:]=]..mediasequence..[=[

#EXT-X-TARGETDURATION:2
]=]

print("Result:\n"..result)

Result:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:01234
#EXT-X-TARGETDURATION:2




Thanks,
Thomas

Am 15.03.2019 um 00:27 schrieb PiBa-NL:

Hi Thomas,

Op 14-3-2019 om 20:28 schreef Thomas Schmiedl:

Hello,

I never got a reply from the original author of xupnpd2 to fix the
hls-handling, so I created a lua-script (thanks to Thierry Fournier),
but it's too slow for the router cpu. Could someone rewrite the script
to a lua-c-module?


I don't think making this exact code a lua-c-module would solve the
issue; Lua is not a 'slow' language. But I do wonder if regex is the
right tool for data manipulation..

Regards,
Thomas

test.cfg:
global
    lua-load /var/media/ftp/playlist.lua

frontend main
    mode http
    bind *:8080
    acl is_index_m3u8 path -m end /index.m3u8
    http-request use-service lua.playlist if is_index_m3u8
    default_backend forward

backend forward
    mode http
    server gjirafa puma.gjirafa.com:443 ssl verify none

playlist.lua:
core.register_service("playlist", "http", function(applet)
    local tcp = core.tcp()
    tcp:connect_ssl("51.75.52.73", 443)
    tcp:send("GET ".. applet.path .." HTTP/1.1\r\nConnection:
Close\r\nHost: puma.gjirafa.com\r\n\r\n")
    local body = tcp:receive("*a")

    local result = string.match(body,"^.*(#EXTM3U.-)#EXTINF")
    result = result ..
string.match(body,"(...%d+.ts%d+.ts%d+.ts)[\r\n|0]*$") 





I think an 'easier' regex might already improve performance; can you try
this one for example ?:
 result = result ..
string.match(body,"(#EXTINF:%d+[/.]%d+,\n%d+[/.]ts.#EXTINF:%d[/.]%d%d%d,.%d+[/.]ts.#EXTINF:%d+[/.]%d+,\n%d+[/.]ts)[\r\n|0]*$") 




With my test using 'https://rextester.com/l/lua_online_compiler' and a
little sample m3u8 it seemed to work faster anyhow.



    applet:set_status(200)
    applet:add_header("Content-Type", "application/x-mpegURL")
    applet:add_header("content-length", string.len(result))
    applet:add_header("Connection", "close")
    applet:start_response()
    applet:send(result)
end)

Am 19.02.2019 um 21:31 schrieb Thomas Schmiedl:

Am 19.02.2019 um 05:29 schrieb Willy Tarreau:

Hello Thomas,

On Sun, Feb 17, 2019 at 05:55:29PM +0100, Thomas Schmiedl wrote:

Hello Bruno,

I think the problem is the parsing of the .m3u8-playlist in
xupnpd2. The
first entry to the .ts-file is 4 hours behind the actual time. But I
have no c++ experience to change the code.


For me if it works but not correctly like this, it clearly indicates
there is a (possibly minor) incompatibility between the client and 
the

server. It just happens that if your client doesn't support https, it
was never tested against this server and very likely needs to be
adapted
to work correctly.


Is it possible in haproxy to manipulate the playlist file (server
response), that only the last .ts-entries will be available and
returned
to xupnpd2?


No, haproxy doesn't manipulate contents. Not only it's completely
out of
the scope of a load balancing proxy, but it would also encourage some
users to try to work around some of their deployment issues in the
ugliest
possible way, causing even more trouble (and frankly, on *every*
infrastructure where you find such horrible tricks deployed, the 
admins

implore you to help them because they're in big trouble and are stuck
with
no option left to fix the issues they've created).

If it's only a matter of modifying one file on the 

Re: haproxy reverse proxy to https streaming backend

2019-03-14 Thread PiBa-NL

Hi Thomas,

Op 14-3-2019 om 20:28 schreef Thomas Schmiedl:

Hello,

I never got a reply from the original author of xupnpd2 to fix the
hls-handling, so I created a lua-script (thanks to Thierry Fournier),
but it's too slow for the router cpu. Could someone rewrite the script
to a lua-c-module?

I don't think making this exact code a lua-c-module would solve the 
issue; Lua is not a 'slow' language. But I do wonder if regex is the 
right tool for data manipulation..

Regards,
Thomas

test.cfg:
global
    lua-load /var/media/ftp/playlist.lua

frontend main
    mode http
    bind *:8080
    acl is_index_m3u8 path -m end /index.m3u8
    http-request use-service lua.playlist if is_index_m3u8
    default_backend forward

backend forward
    mode http
    server gjirafa puma.gjirafa.com:443 ssl verify none

playlist.lua:
core.register_service("playlist", "http", function(applet)
    local tcp = core.tcp()
    tcp:connect_ssl("51.75.52.73", 443)
    tcp:send("GET ".. applet.path .." HTTP/1.1\r\nConnection:
Close\r\nHost: puma.gjirafa.com\r\n\r\n")
    local body = tcp:receive("*a")

    local result = string.match(body,"^.*(#EXTM3U.-)#EXTINF")
    result = result ..
string.match(body,"(...%d+.ts%d+.ts%d+.ts)[\r\n|0]*$") 



I think an 'easier' regex might already improve performance; can you try 
this one for example ?:
    result = result .. 
string.match(body,"(#EXTINF:%d+[/.]%d+,\n%d+[/.]ts.#EXTINF:%d[/.]%d%d%d,.%d+[/.]ts.#EXTINF:%d+[/.]%d+,\n%d+[/.]ts)[\r\n|0]*$")


With my test using 'https://rextester.com/l/lua_online_compiler' and a 
little sample m3u8 it seemed to work faster anyhow.




    applet:set_status(200)
    applet:add_header("Content-Type", "application/x-mpegURL")
    applet:add_header("content-length", string.len(result))
    applet:add_header("Connection", "close")
    applet:start_response()
    applet:send(result)
end)

Am 19.02.2019 um 21:31 schrieb Thomas Schmiedl:

Am 19.02.2019 um 05:29 schrieb Willy Tarreau:

Hello Thomas,

On Sun, Feb 17, 2019 at 05:55:29PM +0100, Thomas Schmiedl wrote:

Hello Bruno,

I think the problem is the parsing of the .m3u8-playlist in 
xupnpd2. The

first entry to the .ts-file is 4 hours behind the actual time. But I
have no c++ experience to change the code.


For me if it works but not correctly like this, it clearly indicates
there is a (possibly minor) incompatibility between the client and the
server. It just happens that if your client doesn't support https, it
was never tested against this server and very likely needs to be 
adapted

to work correctly.


Is it possible in haproxy to manipulate the playlist file (server
response), that only the last .ts-entries will be available and 
returned

to xupnpd2?


No, haproxy doesn't manipulate contents. Not only it's completely 
out of

the scope of a load balancing proxy, but it would also encourage some
users to try to work around some of their deployment issues in the
ugliest
possible way, causing even more trouble (and frankly, on *every*
infrastructure where you find such horrible tricks deployed, the admins
implore you to help them because they're in big trouble and are stuck
with
no option left to fix the issues they've created).

If it's only a matter of modifying one file on the fly, you may manage
to do it using Lua : instead of forwarding the request to the server,
you send it to a Lua function, which itself makes the request to the
server, buffers the response, rewrites it, then sends it back to the
client. You must just make sure to only send there the requests for
the playlist file and nothing else.

Could someone send me such a lua-script example and how to include in
haproxy. Thanks


I personally think this is ugly compared to trying to fix the faulty
client. Maybe you can report your issue to the author(s) and share your
config to help them reproduce it ?

Regards,
Willy


Regards,
PiBa-NL (Pieter)




regtest, response length check failure for /reg-tests/http-capture/h00000.vtc with HTX enabled, using 2.0-dev1

2019-02-26 Thread PiBa-NL

Hi List, Christopher,

With 2.0-dev1-6c1b667 and 2.0-dev1-12a7184 I get the 'failure' below 
when running reg-tests with HTX enabled. (Without HTX the test passes.)


Seems this commit made it return different results:
http://git.haproxy.org/?p=haproxy.git;a=commit;h=b8d2ee040aa21f2906a4921e5e1c7afefb7e

I 'think' the syslog output for a single request/response should remain 
the same with/without HTX? Or should the size check be less strict, or 
accept 1 of 2 possible outcomes with/without HTX?


Regards,
PiBa-NL (Pieter)

 S 0.0 syslog|<134>Feb 26 20:42:52 haproxy[56313]: ::1:46091 
[26/Feb/2019:20:42:52.065] fe be/srv 0/0/0/2/2 200 17473 - -  
1/1/0/0/0 0/0 {HPhx8n59qjjNBLjP} {htb56qDdCcbRVTfS} "GET / HTTP/1.1"
**   S 0.0 === expect ~ "[^:\\[ ]\\[${h_pid}\\]: .* .* fe be/srv .* 
200 176...
 S 0.0 EXPECT FAILED ~ "[^:\[ ]\[56313\]: .* .* fe be/srv .* 200 
17641 - -  .* .* {HPhx8n59qjjNBLjP} {htb56qDdCcbRVTfS} "GET / 
HTTP/1\.1""






Re: h1-client to h2-server host header / authority conversion failure.?

2019-02-01 Thread PiBa-NL

Hi Willy,

Op 2-2-2019 om 0:01 schreef Willy Tarreau:

On Fri, Feb 01, 2019 at 09:43:13PM +0100, PiBa-NL wrote:

The 'last' part is in TCP mode, and is intended like that to allow me to run
tcpdump/wireshark on the unencrypted traffic, and to be certain that
haproxy would not modify it before sending. But maybe the test contained a
'half done' edit as well; I'll attach a new test now.

OK.


I've not tried 1.9, but did try with '2.0-dev0-ff5dd74 2019/01/31'; that
should contain the fix as well, right?

I just rechecked and no, these ones were added after ff5dd74 :

9c9da5e MINOR: muxes: Don't bother to LIST_DEL(&conn->list) before calling conn_
dc21ff7 MINOR: debug: Add an option that causes random allocation failures.
3c4e19f BUG/MEDIUM: backend: always release the previous connection into its own
3e45184 BUG/MEDIUM: htx: check the HTX compatibility in dynamic use-backend rule
9c4f08a BUG/MINOR: tune.fail-alloc: Don't forget to initialize ret.
1da41ec BUG/MINOR: backend: check srv_conn before dereferencing it
5be92ff BUG/MEDIUM: mux-h2: always omit :scheme and :path for the CONNECT method
053c157 BUG/MEDIUM: mux-h2: always set :authority on request output
32211a1 BUG/MEDIUM: stream: Don't forget to free s->unique_id in stream_free().

It's 053c157 which fixes it. You scared me, I thought I had messed up with
the commit :-) I tested again here and it still works for me.

Cheers,
Willy


Sorry, indeed all 4 tests pass. ( Using 2.0-dev0-32211a1 2019/02/01 )

I must have mixed up the git id to sync up with in my makefile; thought I 
picked the last one..


Sorry for the noise! Thanks for fixing and re-checking :)

Regards,

PiBa-NL (Pieter)




Re: h1-client to h2-server host header / authority conversion failure.?

2019-01-28 Thread PiBa-NL

Hi Willy, List,

Just a little check: was the mail below received properly, with the 6 
attachments (vtc/vtc/log/png/png/pcapng)?

(As it didn't show up on the mail-archive.)

Regards,
PiBa-NL (Pieter)

Op 26-1-2019 om 21:04 schreef PiBa-NL:

Hi Willy,

Op 25-1-2019 om 17:04 schreef Willy Tarreau:

Hi Pieter,

On Fri, Jan 25, 2019 at 01:01:19AM +0100, PiBa-NL wrote:

Hi List,

Attached a regtest which I 'think' should pass.

**   s1    0.0 === expect tbl.dec[1].key == ":authority"
 s1    0.0 EXPECT tbl.dec[1].key (host) == ":authority" failed

It seems to me the Host <> Authority conversion isn't happening 
properly.?

But maybe I'm just making a mistake in the test case...

I was using HA-Proxy version 2.0-dev0-f7a259d 2019/01/24 with this 
test.


The test was inspired by the attempt to connect to mail google com , as
discussed in the "haproxy 1.9.2 with boringssl" mail thread.. Not 
sure if

this is the main problem, but it seems suspicious to me..
It's not as simple, :authority is only required for CONNECT and is 
optional
for other methods with Host as a fallback. Clients are encouraged to 
use it
instead of the Host header field, according to paragraph 8.1.2.3, but 
there
is nothing indicating that a gateway may nor should build one from 
scratch

when translating HTTP/1.1 to HTTP/2. In fact the authority part is
generally not present in the URIs we receive as a gateway, so what 
we'd put
there would be completely reconstructed from the host header field. I 
don't

even know if all servers are fine with authority only instead of Host.

Please note, I'm not against changing this, I just want to be sure we
actually fix something and that we don't break anything. Thus if you 
have

any info indicating there is an issue with this one missing, it could
definitely help.

Thanks!
Willy


Today I've given it another shot (connecting to mail google com).
Is there a way in haproxy to directly 'manipulate' the h2 headers? 
Setting an h2 header with set-header :authority didn't seem to work.?


See attached some logs a packetcapture and a vtc that uses google's 
servers itself.


It seems google replies "Header: :status: 400 Bad Request" but leaves 
me 'guessing' why it would be invalid; also the 'body' doesn't get 
downloaded but haproxy terminates the connection, which curl then 
reports as missing bytes.. There are a few differences between the 2 
GET requests, authority and scheme.. But I also wonder if that is the 
actual packet with the issue; H2 isn't quite as simple as H1 used to be ;).


Also with "h2-client-mail google vtc" the first request succeeds, but 
the second, where the Host header is used, fails. I think this shows 
there is a 'need' for the :authority header to be present? Or I mixed 
something up...


p.s.
Wireshark doesn't nicely show/dissect the http2 requests made by vtest, 
probably because for example the first magic packet is spread out over 
multiple TCP packets. Is there a way to make it send them in 1 go, or 
make haproxy 'buffer' the short packets into bigger complete packets? 
I tried putting a little listen/bind/server section in the request 
path, but it just forwarded the small packets as is..


Regards,
PiBa-NL (Pieter)






h1-client to h2-server host header / authority conversion failure.?

2019-01-24 Thread PiBa-NL

Hi List,

Attached a regtest which I 'think' should pass.

**   s1    0.0 === expect tbl.dec[1].key == ":authority"
 s1    0.0 EXPECT tbl.dec[1].key (host) == ":authority" failed

It seems to me the Host <> Authority conversion isn't happening 
properly.? But maybe I'm just making a mistake in the test case...


I was using HA-Proxy version 2.0-dev0-f7a259d 2019/01/24 with this test.

The test was inspired by the attempt to connect to mail.google.com, as 
discussed in the "haproxy 1.9.2 with boringssl" mail thread.. Not sure 
if this is the main problem, but it seems suspicious to me..


Regards,

PiBa-NL (Pieter)

varnishtest "Check H1 client to H2 server with HTX."

feature ignore_unknown_macro

syslog Slog_1 -repeat 1 -level info {
recv
} -start

server s1 -repeat 2 {
  rxpri
  stream 0 {
txsettings
rxsettings
txsettings -ack
  } -run

  stream 1 {
rxreq
expect tbl.dec[1].key == ":authority"
expect tbl.dec[1].value == "domain.tld"
txresp
  } -run

} -start

haproxy h1 -conf {
global
log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err

defaults
mode http
timeout client 2s
timeout server 2s
timeout connect 1s
log global
option http-use-htx

frontend fe1
option httplog
bind "fd@${fe1}"
default_backend b1
backend b1
 server s1 ${s1_addr}:${s1_port} proto h2

frontend fe2
option httplog
bind "fd@${fe2}" proto h2
default_backend b2
backend b2
  server s2 ${s1_addr}:${s1_port} proto h2

} -start

client c1 -connect ${h1_fe1_sock} {
txreq -url "/" -hdr "host: domain.tld"
rxresp
expect resp.status == 200
} -run

client c2 -connect ${h1_fe2_sock} {
  txpri
  stream 0 {
txsettings -hdrtbl 0
rxsettings
  } -run
  stream 1 {
txreq -req GET -url /3 -litIdxHdr inc 1 huf "domain.tld"
rxresp
expect resp.status == 200
  } -run
} -run

#syslog Slog_1 -wait


Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread PiBa-NL

Hi Aleksandar,

Just FYI.

Op 22-1-2019 om 22:08 schreef Aleksandar Lazic:

But this could be a known bug and is fixed in the current git

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.808) exit=2
1 tests failed, 0 tests skipped, 32 tests passed
## Gathering results ##
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed


This was indeed identified as a bug, and is fixed in current master.

The impact of this was rather low though, and this specific issue of a 
few 'missing' mails under certain configuration circumstances existed 
for years before it was spotted with the regtest.


https://www.mail-archive.com/haproxy@formilux.org/msg32190.html
http://git.haproxy.org/?p=haproxy.git;a=commit;h=774c486cece942570b6a9d16afe236a16ee12079

Regards,
PiBa-NL (Pieter)




Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-21 Thread PiBa-NL

Hi Christopher,
Op 21-1-2019 om 15:28 schreef Christopher Faulet:


Hi Pieter,

About the timing issue, could you try the following patch please ? 
With it, I can run the regtest about email alerts without any error.


Thanks,
--
Christopher Faulet


The regtest works for me as well with this patch, without needing the 
'timeout mail' setting.


I think we can call it fixed once committed.

Thanks,
PiBa-NL (Pieter)




Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-16 Thread PiBa-NL

Hi Willy, Christopher,
Op 16-1-2019 om 17:32 schreef Willy Tarreau:

On Wed, Jan 16, 2019 at 02:28:56PM +0100, Christopher Faulet wrote:

here is a new patch, again. Willy, I hope it will be good for the
release 1.9.2.

This one works :).

OK so I've mergd it now, thank you!
Willy


Op 14-1-2019 om 11:17 schreef Christopher Faulet:

If it's ok for you, I'll also merge your regtest.


Can you add the regtest as well into the git repo?

Regards,
PiBa-NL (Pieter)




Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-15 Thread PiBa-NL

Hi Christopher,

Op 15-1-2019 om 10:48 schreef Christopher Faulet:

Le 14/01/2019 à 21:53, PiBa-NL a écrit :

Hi Christopher,

Op 14-1-2019 om 11:17 schreef Christopher Faulet:

Le 12/01/2019 à 23:23, PiBa-NL a écrit :

Hi List,

I've configured haproxy with HTX, and when I try to filter the stats
webpage by sending this request: "GET /?;csv;scope=b1" to '2.0-dev0-762475e
2019/01/10', it crashes with the trace below.
1.9.0 and 1.9.1 are also affected.

Can someone take a look? Thanks in advance.

A regtest is attached that reproduces the behavior, and which I think
could be included into the haproxy repository.



Pieter,

Here is the patch that should fix this issue. This was "just" an
oversight when the stats applet has been adapted to support the HTX.

If it's ok for you, I'll also merge your regtest.

Thanks


It seems the patch did not change/fix the crash? The trace below looks pretty
much the same as before. Did I fail to apply the patch properly? It
seems to have applied correctly, judging from a manual check of a few
lines of the touched code. As for the regtest: yes, please merge it if
it's okay as-is, perhaps once the fix is also ready :).



Hi Pieter,

Sorry, I made my patch too quickly. It seemed ok, but obviously not... 
This new one should do the trick.


Well.. 'something' changed; it is still crashing, though at a different 
place.


Regards,
PiBa-NL (Pieter)

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x004d3770 in htx_sl_p2 (sl=0x0) at include/common/htx.h:237
237 return ist2(HTX_SL_P2_PTR(sl), HTX_SL_P2_LEN(sl));
(gdb) bt full
#0  0x004d3770 in htx_sl_p2 (sl=0x0) at include/common/htx.h:237
No locals.
#1  0x004d3665 in htx_sl_req_uri (sl=0x0) at 
include/common/htx.h:252

No locals.
#2  0x004d1125 in stats_scope_ptr (appctx=0x802678540, 
si=0x8026416d8) at src/stats.c:268

    req = 0x802641410
    htx = 0x80271df80
    uri = {ptr = 0x60932e  
"H\213E\320H\211E\370H\213E\370H\201\304\260", len = 4304914720}
    p = 0x4802631048 <error: Cannot access memory at address 0x4802631048>
#3  0x004d8505 in stats_send_htx_redirect (si=0x8026416d8, 
htx=0x8027c8e40) at src/stats.c:3162

    scope_ptr = 0x5f80f5 <__pool_get_first+21> "H\211E\310H\203}\310"
    scope_txt = 
"\000\342\377\377\377\177\000\000\351}M\000\000\000\000\000x\024d\002\b\000\000\000x\024d\002"

    s = 0x802641400
    uri = 0x802638000
    appctx = 0x802678540
    sl = 0x8027c8e40
    flags = 8
#4  0x004d60fb in htx_stats_io_handler (appctx=0x802678540) at 
src/stats.c:3337

    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
    req_htx = 0x8027c8e40
    res_htx = 0x8027c8e40
#5  0x004d2d36 in http_stats_io_handler (appctx=0x802678540) at 
src/stats.c:3393

    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
#6  0x005f7d5f in task_run_applet (t=0x802656780, 
context=0x802678540, state=16385) at src/applet.c:85

    app = 0x802678540
    si = 0x8026416d8
#7  0x005f3023 in process_runnable_tasks () at src/task.c:435
    t = 0x802656780
    state = 16385
    ctx = 0x802678540
    process = 0x5f7cc0 
    t = 0x802656780
    max_processed = 200
#8  0x00516ca2 in run_poll_loop () at src/haproxy.c:2620
    next = 0
    exp = 1394283990
#9  0x005138f8 in run_thread_poll_loop (data=0x8026310e8) at 
src/haproxy.c:2685

    start_lock = 0
    ptif = 0x936d40 
    ptdf = 0x0
#10 0x0050ff26 in main (argc=4, argv=0x7fffeb08) at 
src/haproxy.c:3314

    tids = 0x8026310e8
    threads = 0x8026310f0
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 4051, rlim_max = 4051}
    errmsg = 
"\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000\376\310\311\070\333\207d\000`9\224\000\000\000\000\000\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000\240\352\377\377\377\177\000\000R\201\000\002\b\000\000\000\001\000\000"

    pidfd = -1




Re: stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-14 Thread PiBa-NL

Hi Christopher,

On 14-1-2019 at 11:17, Christopher Faulet wrote:

On 12/01/2019 at 23:23, PiBa-NL wrote:

Hi List,

I've configured haproxy with htx, and it crashes when I try to filter
the stats webpage.

Sending the request "GET /?;csv;scope=b1" to '2.0-dev0-762475e
2019/01/10' crashes it with the trace below.
1.9.0 and 1.9.1 are also affected.

Can someone take a look? Thanks in advance.

A regtest that reproduces the behavior is attached; I think it
could be included in the haproxy repository.



Pieter,

Here is the patch that should fix this issue. This was "just" an 
oversight when the stats applet has been adapted to support the HTX.


If it's ok for you, I'll also merge your regtest.

Thanks


It seems the patch did not change/fix the crash? The trace below looks pretty
much the same as before. Did I fail to apply the patch properly? It
seems to have applied correctly, judging from a manual check of a few
lines of the touched code. As for the regtest: yes, please merge it if
it's okay as-is, perhaps once the fix is also ready :).


Regards,
PiBa-NL (Pieter)

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x005658e7 in strnistr (str1=0x802631048 "fe1", len_str1=3, 
str2=0x271dfcc , 
len_str2=3) at src/standard.c:3657

3657    while (toupper(*start) != toupper(*str2)) {
(gdb) bt full
#0  0x005658e7 in strnistr (str1=0x802631048 "fe1", len_str1=3, 
str2=0x271dfcc , 
len_str2=3) at src/standard.c:3657

    pptr = 0x271dfcc 
    sptr = 0x6995d3 "text/plain"
    start = 0x802631048 "fe1"
    slen = 3
    plen = 3
    tmp1 = 0
    tmp2 = 4294958728
#1  0x004d09ff in stats_dump_proxy_to_buffer (si=0x8026416d8, 
htx=0x8027c8e40, px=0x8026b3c00, uri=0x802638000) at src/stats.c:2087
    scope_ptr = 0x271dfcc <error: Cannot access memory at address 0x271dfcc>

    appctx = 0x802678380
    s = 0x802641400
    rep = 0x802641470
    sv = 0x8027c8e40
    svs = 0x343e1e0
    l = 0x4d3a8f 
    flags = 0
#2  0x004d49e9 in stats_dump_stat_to_buffer (si=0x8026416d8, 
htx=0x8027c8e40, uri=0x802638000) at src/stats.c:2664





Re: Get client IP

2019-01-13 Thread PiBa-NL

Hi,
On 13-1-2019 at 13:11, Aleksandar Lazic wrote:

Hi.

On 13.01.2019 at 12:17, Vũ Xuân Học wrote:

Hi,

Please help me to solve this problem.

I use HAProxy version 1.5.18 in SSL transparent mode, and I cannot get the client IP
in my .net mvc website. With mode http, I can use option forwardfor to catch the
client IP, but with tcp mode my web app reads X-Forwarded-For as null.

  


My diagram:

Client => Firewall => HAProxy => Web

  


I read the HAProxy documentation and tried to use send-proxy. But when using
send-proxy, I cannot access my web.

This is my config:

frontend test2233

     bind *:2233

     option forwardfor

  


     default_backend testecus

backend testecus

     mode http

     server web1 192.168.0.151:2233 check

Above config work, and I can get the client IP

That's good, as it's `mode http` and therefore haproxy can see the HTTP traffic.

Indeed, it can insert the forwardfor HTTP header with 'mode http'.



Config with SSL:

frontend ivan

     bind 192.168.0.4:443
     mode tcp
     option tcplog

#option forwardfor

     reqadd X-Forwarded-Proto:\ https

This can't work, as you use `mode tcp` and therefore haproxy can't see the HTTP
traffic.

From my point of view you now have 2 options.

* use https termination on haproxy. Then you can add this http header.

That's one option indeed.

* use accept-proxy in the bind line. This option requires that the firewall is
able to send the PROXY PROTOCOL header to haproxy.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#5.1-accept-proxy


I don't expect a firewall to send such a header. And if I understand 
correctly, the 'webserver' would need to be configured to accept the 
PROXY protocol.
The modification to make in haproxy would be to configure 
send-proxy[-v2-ssl-cn]:

http://cbonte.github.io/haproxy-dconv/1.9/snapshot/configuration.html#5.2-send-proxy
And here is how to configure it with, for example, nginx:
https://wakatime.com/blog/23-how-to-scale-ssl-with-haproxy-and-nginx



The different modes are described in the doc
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-mode

Here is a blog post about basic setup of haproxy with ssl
https://www.haproxy.com/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/


     acl tls req.ssl_hello_type 1

     tcp-request inspect-delay 5s

     tcp-request content accept if tls

  


     # Define hosts

     acl host_1 req.ssl_sni -i ebh.vn

     acl host_2 req.ssl_sni hdr_end(host) -i einvoice.com.vn

 


    use_backend eBH if host_1

    use_backend einvoice443 if host_2

  


backend eBH

     mode tcp

     balance roundrobin

     option ssl-hello-chk

    server web1 192.168.0.153:443 maxconn 3 check #cookie web1

    server web1 192.168.0.154:443 maxconn 3 check #cookie web2

  


The above config doesn't work, and I cannot get the client IP. I tried server web1
192.168.0.153:443 send-proxy and server web1 192.168.0.153:443 send-proxy-v2,
but I can't access my web.

This is expected, as the firewall does not send the PROXY protocol header and the
bind line is not configured for that.
Firewalls by themselves will never use the PROXY protocol at all. That it 
doesn't work with send-proxy on the haproxy server line is likely 
because the webservice receiving the traffic isn't configured to 
accept the PROXY protocol. How to configure a ".net mvc website" to 
accept that, I don't know; nor whether it is even possible at all..
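To make the send-proxy side concrete, here is a hedged sketch of what the two ends could look like, reusing the addresses from the thread (treat it as an illustration, not a verified drop-in config):

```
# haproxy: ask the backend server to accept a PROXY protocol header
backend eBH
     mode tcp
     balance roundrobin
     server web1 192.168.0.153:443 maxconn 3 check send-proxy

# nginx backend (if the webserver were nginx): accept PROXY protocol
#   server {
#       listen 443 ssl proxy_protocol;
#       set_real_ip_from 192.168.0.4;     # trust the haproxy address
#       real_ip_header   proxy_protocol;
#   }
```

Both sides have to agree: send-proxy (or send-proxy-v2) on the haproxy server line is only safe once the receiving listener expects the header.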



Many thanks,

Best regards
Aleks


Thanks & Best Regards!

* VU XUAN HOC


Regards,
PiBa-NL (Pieter)



stats webpage crash, htx and scope filter, [PATCH] REGTEST is included

2019-01-12 Thread PiBa-NL

Hi List,

I've configured haproxy with htx, and it crashes when I try to filter the stats webpage.
Sending the request "GET /?;csv;scope=b1" to '2.0-dev0-762475e 
2019/01/10' crashes it with the trace below.

1.9.0 and 1.9.1 are also affected.

Can someone take a look? Thanks in advance.

A regtest that reproduces the behavior is attached; I think it 
could be included in the haproxy repository.


Regards,
PiBa-NL (Pieter)

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00564fe7 in strnistr (str1=0x802631048 "fe1", len_str1=3, 
str2=0x804e3bf4c , 
len_str2=3)

    at src/standard.c:3657
3657    while (toupper(*start) != toupper(*str2)) {
(gdb) bt full
#0  0x00564fe7 in strnistr (str1=0x802631048 "fe1", len_str1=3, 
str2=0x804e3bf4c , 
len_str2=3)

    at src/standard.c:3657
    pptr = 0x804e3bf4c <error: Cannot access memory at address 0x804e3bf4c>

    sptr = 0x80271df80 "\330?"
    start = 0x802631048 "fe1"
    slen = 3
    plen = 3
    tmp1 = 0
    tmp2 = 4294959392
#1  0x004d01d3 in stats_dump_proxy_to_buffer (si=0x8026416d8, 
htx=0x8027c8e40, px=0x8026b3c00, uri=0x802638000) at src/stats.c:2079

    appctx = 0x802678380
    s = 0x802641400
    rep = 0x802641470
    sv = 0x8027c8e40
    svs = 0x33be1e0
    l = 0x4d31df 
    flags = 0
#2  0x004d4139 in stats_dump_stat_to_buffer (si=0x8026416d8, 
htx=0x8027c8e40, uri=0x802638000) at src/stats.c:2652

    appctx = 0x802678380
    rep = 0x802641470
    px = 0x8026b3c00
#3  0x004d56bb in htx_stats_io_handler (appctx=0x802678380) at 
src/stats.c:3299

    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
    req_htx = 0x8027c8e40
    res_htx = 0x8027c8e40
#4  0x004d2546 in http_stats_io_handler (appctx=0x802678380) at 
src/stats.c:3367

    si = 0x8026416d8
    s = 0x802641400
    req = 0x802641410
    res = 0x802641470
#5  0x005f729f in task_run_applet (t=0x8026566e0, 
context=0x802678380, state=16385) at src/applet.c:85

    app = 0x802678380
    si = 0x8026416d8
#6  0x005f2533 in process_runnable_tasks () at src/task.c:435
    t = 0x8026566e0
    state = 16385
    ctx = 0x802678380
    process = 0x5f7200 
    t = 0x8026566e0
    max_processed = 199
#7  0x005163b2 in run_poll_loop () at src/haproxy.c:2619
    next = 0
    exp = 1137019023
#8  0x00513008 in run_thread_poll_loop (data=0x8026310f0) at 
src/haproxy.c:2684

    start_lock = 0
    ptif = 0x935d40 
    ptdf = 0x0
#9  0x0050f636 in main (argc=4, argv=0x7fffeb08) at 
src/haproxy.c:3313

    tids = 0x8026310f0
    threads = 0x8026310f8
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 4052, rlim_max = 4052}
    errmsg = 
"\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000t\240\220?\260|6\224`)\224\000\000\000\000\000\000\353\377\377\377\177\000\000\060\353\377\377\377\177\000\000\b\353\377\377\377\177\000\000\004\000\000\000\000\000\000\000\240\352\377\377\377\177\000\000R\201\000\002\b\000\000\000\001\000\000"

    pidfd = -1

From 838ecb4e153c1d859d0a49e0554ff050ff10033c Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sat, 12 Jan 2019 21:57:48 +0100
Subject: [PATCH] REGTEST: checks basic stats webpage functionality

This regtest verifies that the stats webpage can be used to change a
server state to maintenance or drain, and that filtering the page scope
will result in a filtered page.
---
 .../h_webstats-scope-and-post-change.vtc  | 83 +++
 1 file changed, 83 insertions(+)
 create mode 100644 reg-tests/webstats/h_webstats-scope-and-post-change.vtc

diff --git a/reg-tests/webstats/h_webstats-scope-and-post-change.vtc 
b/reg-tests/webstats/h_webstats-scope-and-post-change.vtc
new file mode 100644
index ..a77483b5
--- /dev/null
+++ b/reg-tests/webstats/h_webstats-scope-and-post-change.vtc
@@ -0,0 +1,83 @@
+varnishtest "Webgui stats page check filtering with scope and changing server 
state"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+server s1 {
+} -start
+
+haproxy h1 -conf {
+  global
+stats socket /tmp/haproxy.socket level admin
+
+  defaults
+mode http
+${no-htx} option http-use-htx
+
+  frontend fe1
+bind "fd@${fe1}"
+stats enable
+stats refresh 5s
+stats uri /
+stats admin if TRUE
+
+  backend b1
+server srv1 ${s1_addr}:${s1_port}
+server srv2 ${s1_addr}:${s1_port}
+server srv3 ${s1_addr}:${s1_port}
+
+  backend b2
+server srv1 ${s1_addr}:${s1_port}
+server srv2 ${s1_addr}:${s1_port}
+

Re: Lots of mail from email alert on 1.9.x

2019-01-12 Thread PiBa-NL

Hi Willy, Olivier,

On 12-1-2019 at 13:11, Willy Tarreau wrote:

Hi Pieter,

it is needed to prepend this at the beginning of chk_report_conn_err() :

if (!check->server)
return;

We need to make sure that check->server is properly tested everywhere.
With a bit of luck this one was the only remnant.

Thanks!
Willy


With the check above added, mail alerts seem to work properly here, or 
at least as well as they used to.


Once the patches and the above addition get committed, that leaves the other 
'low priority' issue of needing a short timeout to send the exact number 
of 'expected' mails.

    EXPECT resp.http.mailsreceived (10) == "16" failed

To be honest I only noticed it while making the regtest and 
double-checking what to expect.. When I validated mails in my actual 
environment it seemed to work properly. (Though the server I took out to 
test has a health check with a 60-second interval..) Anyhow, it's been 
like this for years afaik; I guess it won't matter much if it stays like 
this a bit longer.


Regards,
PiBa-NL (Pieter)




Re: Lots of mail from email alert on 1.9.x

2019-01-11 Thread PiBa-NL

Hi Olivier,

On 11-1-2019 at 19:17, Olivier Houchard wrote:

Ok so erm, I'd be lying if I claimed I enjoy working on the check code, or
that I understand it fully. However, after talking with Willy and Christopher,
I think I may have come up with an acceptable solution, and the attached patch
should fix it (or at least get haproxy to segfault, but it shouldn't
mailbomb you anymore).
Pieter, I'd be very interested to know if it still works with your setup.
It's a different way of trying to fix what you tried to fix with
1714b9f28694d750d446917672dd59c46e16afd7.
I'd like to be sure I didn't break it for you again :)

Regards,

Olivier

(Slightly modified patches; I think there was a potential race condition
when running with multiple threads.)

Olivier


Thanks for this 'change in behavior' ;). Indeed the mailbomb is fixed, 
and it seems the expected mails get generated and delivered, but a 
segfault also happens on occasion. Not with the regtest as it was, but 
with a few minor modifications (adding an unreachable mailserver and 
giving it a little more time seems to be the most reliable reproduction 
a.t.m.) it will crash consistently after 11 seconds.. So I guess the 
patch needs a bit more tweaking.


Regards,
PiBa-NL (Pieter)

Core was generated by `haproxy -d -f /tmp/vtc.37274.4b8a1a3a/h1/cfg'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00500955 in chk_report_conn_err (check=0x802616a10, 
errno_bck=0, expired=1) at src/checks.c:689

689 dns_trigger_resolution(check->server->dns_requester);
(gdb) bt full
#0  0x00500955 in chk_report_conn_err (check=0x802616a10, 
errno_bck=0, expired=1) at src/checks.c:689

    cs = 0x8027de0c0
    conn = 0x802683180
    err_msg = 0x80266d0c0 " at step 1 of tcp-check (connect)"
    chk = 0x80097b848
    step = 1
    comment = 0x0
#1  0x005065a5 in process_chk_conn (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:2261

    check = 0x802616a10
    proxy = 0x8026c3000
    cs = 0x8027de0c0
    conn = 0x802683180
    rv = 0
    ret = 0
    expired = 1
#2  0x0050596e in process_chk (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:2330

    check = 0x802616a10
#3  0x004fe0a2 in process_email_alert (t=0x802656640, 
context=0x802616a10, state=513) at src/checks.c:3210

    check = 0x802616a10
    q = 0x802616a00
    alert = 0x7fffe340
#4  0x005f2523 in process_runnable_tasks () at src/task.c:435
    t = 0x802656640
    state = 513
    ctx = 0x802616a10
    process = 0x4fdeb0 
    t = 0x8026566e0
    max_processed = 200
#5  0x005163a2 in run_poll_loop () at src/haproxy.c:2619
    next = 1062130135
    exp = 1062129684
#6  0x00512ff8 in run_thread_poll_loop (data=0x8026310f0) at 
src/haproxy.c:2684

    start_lock = 0
    ptif = 0x935d40 
    ptdf = 0x0
#7  0x0050f626 in main (argc=4, argv=0x7fffead8) at 
src/haproxy.c:3313

    tids = 0x8026310f0
    threads = 0x8026310f8
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 4046, rlim_max = 4046}
    errmsg = 
"\000\352\377\377\377\177\000\000\000\353\377\377\377\177\000\000\330\352\377\377\377\177\000\000\004\000\000\000\000\000\000\00 
0\b\250\037\315})5:`)\224\000\000\000\000\000\320\352\377\377\377\177\000\000\000\353\377\377\377\177\000\000\330\352\377\377\377\177\000\000\004 
\000\000\000\000\000\00
 reg-tests/mailers/k_healthcheckmail.vtc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/reg-tests/mailers/k_healthcheckmail.vtc 
b/reg-tests/mailers/k_healthcheckmail.vtc
index d3af3589..820191c8 100644
--- a/reg-tests/mailers/k_healthcheckmail.vtc
+++ b/reg-tests/mailers/k_healthcheckmail.vtc
@@ -48,6 +48,7 @@ defaults
 #  timeout mail 20s
 #  timeout mail 200ms
   mailer smtp1 ${h1_femail_addr}:${h1_femail_port}
+  mailer smtp2 ipv4@192.0.2.100:1025
 
 } -start
 
@@ -62,7 +63,7 @@ client c1 -connect ${h1_luahttpservice_sock} {
 
 delay 2
 server s2 -repeat 5 -start
-delay 5
+delay 10
 
 client c2 -connect ${h1_luahttpservice_sock} {
 timeout 2


Re: Lots of mail from email alert on 1.9.x

2019-01-10 Thread PiBa-NL

Hi Johan, Olivier, Willy,

On 10-1-2019 at 17:00, Johan Hendriks wrote:

I just updated to 1.9.1 on my test system.

We noticed that when a server fails we now get tons of mail, and with
tons we mean a lot.

After a client backend server fails we usually get 1 mail on 1.8.x now
with 1.9.1 within 1 minute we have the following.

mailq | grep -B2 l...@testdomain.nl | grep '^[A-F0-9]' | awk '{print
$1}' | sed 's/*//' | postsuper -d -
postsuper: Deleted: 19929 messages

My setting from the backend part is as follows.

     email-alert mailers alert-mailers
     email-alert from l...@testdomain.nl
     email-alert to not...@testdomain.nl
     server webserver09 11.22.33.44:80 check

Has something changed in 1.9.x (it was on 1.9.0 also)

regards
Johan Hendriks


It's a 'known issue', see: 
https://www.mail-archive.com/haproxy@formilux.org/msg32290.html
A 'regtest' is also added in that mail thread to aid developers in 
reproducing the issue and validating a possible fix.


@Olivier, Willy, may I assume this mailbomb feature is 'planned' to get 
fixed in 1.9.2? (Perhaps a bugtracker with a 'target version' would be 
nice ;) ?)


Regards,
PiBa-NL (Pieter)
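For reference, the email-alert lines in the backend snippet quoted above only take effect together with a mailers section defined elsewhere in the configuration. A minimal sketch of how the pieces line up (section name matches the snippet; addresses are illustrative, not from Johan's actual config):

```
# mailers section referenced by name from the backend
mailers alert-mailers
     mailer smtp1 192.0.2.10:25
     # recent versions also accept a 'timeout mail' here, e.g.:
     # timeout mail 20s

backend example
     email-alert mailers alert-mailers
     email-alert level alert
     email-alert from haproxy@example.com
     email-alert to admin@example.com
     server webserver09 11.22.33.44:80 check
```

With a working setup, one state change on the server line should normally produce one alert mail, which is what makes the 1.9.x flood described above stand out.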



Re: [PATCH] REGTEST: filters: add compression test

2019-01-09 Thread PiBa-NL

Thank you Christopher & Frederic.

On 9-1-2019 at 14:47, Christopher Faulet wrote:

On 09/01/2019 at 10:43, Frederic Lecaille wrote:

On 1/8/19 11:25 PM, PiBa-NL wrote:

Hi Frederic,


Hi Pieter,


On 7-1-2019 at 10:13, Frederic Lecaille wrote:

On 12/23/18 11:38 PM, PiBa-NL wrote:

As requested hereby the regtest send for inclusion into the git
repository.

It is OK like that.

Note that your patch does not add reg-test/filters/common.pem, which could
be a symlink to ../ssl/common.pem.
Also note that since Christopher's commit 8f16148, we add such a line
where possible:
 ${no-htx} option http-use-htx
We should also rename your test files to reg-test/filters/h0.*
Thank you.
Fred.


Together with the changes you already supplied me off-list, I've
also added "--max-time 15" to the curl request; that should be
sufficient for most systems to complete the 3-second testcase, and it
allows the shell command to complete without varnishtest killing it
after a timeout and not showing any of the curl output..

One last question: currently it's being added to a new folder,
reg-test/filters/ ; perhaps it should be in reg-test/compression/ ?
If you agree that needs changing, I guess that can be done upon
committing it?


I have modified your patch to move your new files to 
reg-test/compression.


I have also applied this to it ;) :   's/\r$//'



Note that the test fails on my FreeBSD system when using HTX with
'2.0-dev0-251a6b7 2019/01/08'; I'm not aware it ever worked (I didn't
test it with HTX before..).
 top  15.2 shell_out|curl: (28) Operation timed out after 15036
milliseconds with 187718 bytes received


Ok, I will take some time to have a look at this BSD specific issue.
Note that we can easily use the CLI at the end of the script to
troubleshooting anything.


Log attached.. Would it help to log it with the complete "filter trace
name BEFORE / filter compression / filter trace name AFTER"? Or are
there other details I could try and gather?


I do not feel at ease enough on compression/filter topics to reply
to your question, Pieter ;)

Nevertheless, I think your test deserves to be merged.

*The patch to be merged is attached to this mail*.

Thanks a lot, Pieter.



Thanks Fred and Pieter, now merged. I've just updated the patch to add 
the list of required options in the VTC file.


Hereby just a little confirmation that this works well now in my tests 
too :).


Regards,
PiBa-NL (Pieter)



coredump in h2_process_mux with 1.9.0-8223050

2019-01-08 Thread PiBa-NL

Hi List, Willy,

I got a coredump of 1.9.0-8223050 today, see below. Would this 'likely' be 
the same one as the 'PRIORITY' issue that 1.9.1 fixes?
I have no idea what the exact request/response circumstances were.. 
Anyhow, I updated my system to 2.0-dev0-251a6b7 for the moment; let's see 
if something strange happens again. It might take a few days though, IF it 
still occurs..


Regards,

PiBa-NL (Pieter)

Core was generated by `/usr/local/sbin/haproxy -f 
/var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid'.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at 
src/mux_h2.c:2434

2434    src/mux_h2.c: No such file or directory.
(gdb) bt full
#0  0x004b91c7 in h2_process_mux (h2c=0x802657480) at 
src/mux_h2.c:2434

    h2s = 0x80262c7a0
    h2s_back = 0x80262ca40
#1  0x004b844d in h2_send (h2c=0x802657480) at src/mux_h2.c:2560
    flags = 0
    conn = 0x8026dc300
    done = 0
    sent = 1
#2  0x004b8a49 in h2_process (h2c=0x802657480) at src/mux_h2.c:2640
    conn = 0x8026dc300
#3  0x004b32e1 in h2_wake (conn=0x8026dc300) at src/mux_h2.c:2715
    h2c = 0x802657480
#4  0x005c8158 in conn_fd_handler (fd=7) at src/connection.c:190
    conn = 0x8026dc300
    flags = 0
    io_available = 0
#5  0x005e3c7c in fdlist_process_cached_events (fdlist=0x9448f0 
) at src/fd.c:441

    fd = 7
    old_fd = 7
    e = 117
#6  0x005e377c in fd_process_cached_events () at src/fd.c:459
No locals.
#7  0x00514296 in run_poll_loop () at src/haproxy.c:2655
    next = 762362654
    exp = 762362654
#8  0x00510b78 in run_thread_poll_loop (data=0x802615970) at 
src/haproxy.c:2684

    start_lock = 0
    ptif = 0x92ed10 
    ptdf = 0x0
#9  0x0050d1a6 in main (argc=6, argv=0x7fffec60) at 
src/haproxy.c:3313

    tids = 0x802615970
    threads = 0x802615998
    i = 1
    old_sig = {__bits = {0, 0, 0, 0}}
    blocked_sig = {__bits = {4227856759, 4294967295, 4294967295, 
4294967295}}

    err = 0
    retry = 200
    limit = {rlim_cur = 2040, rlim_max = 2040}
    errmsg = 
"\000\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\f\373\353\230\373\032\351~\240\270\223\000\000\000\000\000X\354\377\377\377\177\000\000\230\354\377\377\377\177\000\000`\354\377\377\377\177\000\000\006\000\000\000\000\000\000\000\000\354\377\377\377\177\000\000\302z\000\002\b\000\000\000\001\000\000"

    pidfd = 17
(gdb)




Re: [PATCH] REGTEST: filters: add compression test

2019-01-08 Thread PiBa-NL

Hi Frederic,

On 7-1-2019 at 10:13, Frederic Lecaille wrote:

On 12/23/18 11:38 PM, PiBa-NL wrote:
As requested hereby the regtest send for inclusion into the git 
repository.

It is OK like that.

Note that your patch does not add reg-test/filters/common.pem, which could 
be a symlink to ../ssl/common.pem.
Also note that since Christopher's commit 8f16148, we add such a line 
where possible:

    ${no-htx} option http-use-htx
We should also rename your test files to reg-test/filters/h0.*
Thank you.
Fred.


Together with the changes you already supplied me off-list, I've 
also added "--max-time 15" to the curl request; that should be 
sufficient for most systems to complete the 3-second testcase, and it 
allows the shell command to complete without varnishtest killing it 
after a timeout and not showing any of the curl output..


One last question: currently it's being added to a new folder, 
reg-test/filters/ ; perhaps it should be in reg-test/compression/ ?
If you agree that needs changing, I guess that can be done upon 
committing it?


Note that the test fails on my FreeBSD system when using HTX with 
'2.0-dev0-251a6b7 2019/01/08'; I'm not aware it ever worked (I didn't 
test it with HTX before..).
 top  15.2 shell_out|curl: (28) Operation timed out after 15036 
milliseconds with 187718 bytes received


Log attached.. Would it help to log it with the complete "filter trace 
name BEFORE / filter compression / filter trace name AFTER"? Or are 
there other details I could try and gather?


Regards,

PiBa-NL (Pieter)

From 793e770b399157a1549a2655612a29845b165dd6 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:21:51 +0100
Subject: [PATCH] REGTEST: filters: add compression test

This test checks that data transferred with compression is correctly received 
at different download speeds
---
 reg-tests/filters/common.pem |  1 +
 reg-tests/filters/s0.lua | 19 ++
 reg-tests/filters/s0.vtc | 59 
 3 files changed, 79 insertions(+)
 create mode 120000 reg-tests/filters/common.pem
 create mode 100644 reg-tests/filters/s0.lua
 create mode 100644 reg-tests/filters/s0.vtc

diff --git a/reg-tests/filters/common.pem b/reg-tests/filters/common.pem
new file mode 120000
index ..a4433d56
--- /dev/null
+++ b/reg-tests/filters/common.pem
@@ -0,0 +1 @@
+../ssl/common.pem
\ No newline at end of file
diff --git a/reg-tests/filters/s0.lua b/reg-tests/filters/s0.lua
new file mode 100644
index ..2cc874b9
--- /dev/null
+++ b/reg-tests/filters/s0.lua
@@ -0,0 +1,19 @@
+
+local data = "abcdefghijklmnopqrstuvwxyz"
+local responseblob = ""
+for i = 1,10000 do
+  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
+end
+
+http01applet = function(applet)
+  local response = responseblob
+  applet:set_status(200)
+  applet:add_header("Content-Type", "application/javascript")
+  applet:add_header("Content-Length", string.len(response)*10)
+  applet:start_response()
+  for i = 1,10 do
+applet:send(response)
+  end
+end
+
+core.register_service("fileloader-http01", "http", http01applet)
diff --git a/reg-tests/filters/s0.vtc b/reg-tests/filters/s0.vtc
new file mode 100644
index ..231344a6
--- /dev/null
+++ b/reg-tests/filters/s0.vtc
@@ -0,0 +1,59 @@
+# Checks that compression doesnt cause corruption..
+
+varnishtest "Compression validation"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+global
+#  log stdout format short daemon
+   lua-load${testdir}/s0.lua
+
+defaults
+   modehttp
+   log global
+   ${no-htx} option http-use-htx
+   option  httplog
+
+frontend main-https
+   bind"fd@${fe1}" ssl crt ${testdir}/common.pem
+   compression algo gzip
+   compression type text/html text/plain application/json 
application/javascript
+   compression offload
+   use_backend TestBack  if  TRUE
+
+backend TestBack
+   server  LocalSrv ${h1_fe2_addr}:${h1_fe2_port}
+
+listen fileloader
+   mode http
+   bind "fd@${fe2}"
+   http-request use-service lua.fileloader-http01
+} -start
+
+shell {
+HOST=${h1_fe1_addr}
+if [ "${h1_fe1_addr}" = "::1" ] ; then
+HOST="\[::1\]"
+fi
+
+md5=$(which md5 || which md5sum)
+
+if [ -z $md5 ] ; then
+echo "MD5 checksum utility not found"
+exit 1
+fi
+
+expectchecksum="4d9c62aa5370b8d5f84f17ec2e78f483"
+
+for opt in "" "--limit-rate 300K" "--limit-rate 500K" ; do
+checksum=$(curl --max-time 15 --compressed -k 
"https://$HOST:${h1_fe1_port}"; $opt | $m

Re: regtests - with option http-use-htx

2019-01-08 Thread PiBa-NL

Hi Frederic,

On 8-1-2019 at 16:27, Frederic Lecaille wrote:

On 12/15/18 4:52 PM, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with the added option: option 
http-use-htx


Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the issues below so far:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner, with the 1.1 > 2.0 
expectation for syslog adjusted. This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
The difference here is use=1 vs use=0; maybe that is better, but 
then the 'old' expectation seems wrong, and the bug is in the case 
without htx?


Note that the server s1 never responds.

Furthermore, the c1 client is run with the -run argument.
This means that we wait for its termination before accessing the CLI.
Then we check that there is no consistency issue with the stick-table:

if the entry has expired we get only this line:

    table: http1, type: ip, size:1024, used:0

if not we get these two lines:

    table: http1, type: ip, size:1024, used:1
    .*    use=0 ...

here used=1 means there is still an entry in the stick-table, and 
use=0 means it is not currently in use (I guess this is because the 
client has closed its connection).
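As an illustration of that exchange, a stick-table query over the stats socket might look like the following (the socket path matches the regtest config; the entry address and counter values are illustrative and depend on timing):

```
$ echo "show table http1" | socat stdio unix-connect:/tmp/haproxy.socket
# table: http1, type: ip, size:1024, used:1
0x802661200: key=127.0.0.1 use=0 exp=9911 gpt0=0 gpc0=0 http_req_cnt=1
```

Once the entry expires, only the "# table: ..." header line with used:0 remains.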


I cannot reproduce your issue with this script on either Linux or 
FreeBSD 11, with or without htx.
Did you try with the 'old' development version (1.9-dev10-c11ec4a 
2018/12/15)? I think it is already fixed in the current version; see my 
own test results below.
 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, 
size:1024, used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* 
gpt0=0 gpc0=0 gpc0_rate\(1\)=0 conn_rate\(1\)=1 
http_req_cnt=1 http_req_rate\(1\)=1 http_err_cnt=0 
http_err_rate\(10000\)=0)\n$"


Regards,

PiBa-NL (Pieter)



I tried again today with 2.0-dev0-251a6b7, 1.9.0-8223050 and 
1.9-dev10-c11ec4a:


HA-Proxy version 2.0-dev0-251a6b7 2019/01/08 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.147)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed


HA-Proxy version 1.9.0-8223050 2018/12/19 - https://haproxy.org/
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.150)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.128)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.148)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed


HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15
Copyright 2000-2018 Willy Tarreau 
## Without HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (0.146)
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc passed (0.127)
0 tests failed, 0 tests skipped, 2 tests passed
## With HTX
#    top  TEST ./PB-TEST/2018/connection-b0.vtc passed (8.646)
*    top   0.0 TEST ./PB-TEST/2018/stick-table-b1.vtc starting
 h1    0.0 CLI recv|# table: http1, type: ip, size:1024, used:1
 h1    0.0 CLI recv|0x80262a200: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

*    top   0.0 RESETTING after ./PB-TEST/2018/stick-table-b1.vtc
**   h1    0.0 Reset and free h1 haproxy 92940
#    top  TEST ./PB-TEST/2018/stick-table-b1.vtc FAILED (0.127) exit=2
1 tests failed, 0 tests skipped, 1 tests passed

With the 'old' 1.9-dev10 version and with HTX I can still reproduce the 
"passed (8.646)" and "use=1", but both 1.9.0 and 2.0-dev don't show 
that behavior. I have not bisected further, but I don't think there is 
anything to do at the moment regarding this old (already fixed) issue.


Regards,

PiBa-NL (Pieter)




Re: compression in defaults happens twice with 1.9.0

2019-01-07 Thread PiBa-NL

Hi Christopher,

Op 7-1-2019 om 16:32 schreef Christopher Faulet:

Le 06/01/2019 à 16:22, PiBa-NL a écrit :

Hi List,

Using both 1.9.0 and 2.0-dev0-909b9d8 compression happens twice when
configured in defaults.
This was noticed by user walle303 on IRC.

Seems like a bug to me as 1.8.14 does not show this behavior. Attached a
little regtest that reproduces the issue.

Can someone take a look, thanks in advance.



Hi Pieter,

Here is the patch that should fix this issue. Could you confirm, please?

Thanks

Works for me. Thanks!

Regards,
PiBa-NL (Pieter)



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-07 Thread PiBa-NL

Hi Willy,
Op 7-1-2019 om 15:25 schreef Willy Tarreau:

Hi Pieter,

On Sun, Jan 06, 2019 at 04:38:21PM +0100, PiBa-NL wrote:

The 23654 mails received for a failed server are a bit much..

I agree. I really don't know much how the mails work to be honest, as
I have never used them. I remember that we reused a part of the tcp-check
infrastructure because by then it offered a convenient way to proceed with
send/expect sequences. Maybe there's something excessive in the sequence
there, such as a certain status code being expected at the end while the
mail succeeds, I don't know.

Given that this apparently has always been broken,
For one part it has always been broken (needing the short mailer timeout to 
send all expected mails); for the other part, at least up to 1.8.14 it 
did NOT send thousands of mails, so that would be a regression in the 
current 1.9 version that should get fixed on a shorter term.

I'm hesitant between
merging this in the slow category or the broken one. My goal with "broken"
was to keep the scripts that trigger broken behaviours that need to be
addressed, rather than keep broken scripts.
Indeed, keeping broken scripts wouldn't be helpful in the long run, 
unless there is still the intent to fix them. That isn't what the makefile 
says about 'LEVEL 5' though. It says it's for 'broken scripts' and to 
quickly disable them, not, as you write here, for scripts that show 
broken haproxy behavior.

  My goal is to make sure we
never consider it normal to have failures in the regular test suite,
otherwise you know how it becomes, just like compiler warnings, people
say "oh I didn't notice this new error in the middle of all other ones".
Agreed, though I will likely fall into the same trap some day; apologies in 
advance ;). I guess we could 'fix' the regtest by specifying 
'timeout mail 200', which would fix it for 1.7 and 1.8, and might help 
for 1.9 regression tests and to get it fixed to at least not send 
thousands of mails. We might forget about the short-time requirement 
then though, which seems strange as well. And the test wouldn't be 1.6 
compatible, as that version doesn't have the setting at all.

Thus probably the best thing to do is to use it at level 5 so that it's
easier to work on the bug without triggering false positives when doing
regression testing.

What's your opinion ?


With a changed description for 'level 5' being 'shows broken haproxy 
behavior, to be fixed in a future release', I think it would fit in there 
nicely. Can you change the starting letter of the .vtc test (and the 
.lua and the reference to it) to 'k' during committing? Or shall I re-send it?


P.S. What do you think about the naming of the test: 
'k_healthcheckmail.vtc' or 'k0.vtc'? Personally I don't think the 
numbering of tests makes them easier to use.



thanks,
Willy


Regards,

PiBa-NL (Pieter)




Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-06 Thread PiBa-NL

Hi,
Two weeks have passed without a reply, so hereby a little 'bump'. I 
know everyone has been busy, but it would be nice to get the test added, or at 
least the biggest issue (the 'mailbomb') fixed before the next release. If 
it's scheduled to get looked at later, that's okay; just making sure it 
isn't forgotten about :).


The 23654 mails received for a failed server are a bit much..
 c2    7.5 EXPECT resp.http.mailsreceived (23654) == "16" failed

Regards,
PiBa-NL (Pieter)

Op 23-12-2018 om 23:37 schreef PiBa-NL:

Changed subject of patch requirement to 'REGTEST'.

Op 23-12-2018 om 21:17 schreef PiBa-NL:

Hi List,

Attached a new test to verify that the 'mailers' section is working 
properly.

Currently with 1.9 the mailers section sends thousands of mails for my setup...

As the test is rather slow I have marked it with a starting letter 's'.

Note that the test also fails on 1.6/1.7/1.8 but can be 'fixed' there 
by adding a 'timeout mail 200ms'.. (except on 1.6 which doesn't have 
that setting.)


I don't think that should be needed though if everything was working 
properly?


If the test could be committed, and related issues exposed fixed that 
would be neat ;)


Thanks in advance,

PiBa-NL (Pieter)








compression in defaults happens twice with 1.9.0

2019-01-06 Thread PiBa-NL

Hi List,

Using both 1.9.0 and 2.0-dev0-909b9d8 compression happens twice when 
configured in defaults.

This was noticed by user walle303 on IRC.

Seems like a bug to me as 1.8.14 does not show this behavior. Attached a 
little regtest that reproduces the issue.


Can someone take a look, thanks in advance.

Regards,

PiBa-NL (Pieter)

 s1    0.0 
txresp|!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_

 s1    0.0 txresp|"#$%&'()*+,-./0123456789:;<=>?@ABCD
***  s1    0.0 shutting fd 4
**   s1    0.0 Ending
***  h1    0.0 debug|:b1.srvrep[000a:adfd]: HTTP/1.1 200 OK
***  h1    0.0 debug|:b1.srvhdr[000a:adfd]: Content-Length: 100
***  h1    0.0 debug|:b1.srvcls[000a:adfd]
 c1    0.0 rxhdr|HTTP/1.1 200 OK\r
 c1    0.0 rxhdr|Content-Encoding: gzip\r
 c1    0.0 rxhdr|Transfer-Encoding: chunked\r
 c1    0.0 rxhdr|Co57\r
 c1    0.0 rxhdr|\037\213\010
 c1    0.0 rxhdrlen = 78
 c1    0.0 http[ 0] |HTTP/1.1
 c1    0.0 http[ 1] |200
 c1    0.0 http[ 2] |OK
 c1    0.0 http[ 3] |Content-Encoding: gzip
 c1    0.0 http[ 4] |Transfer-Encoding: chunked
 c1    0.0 http[ 5] |Co57
 c1    0.0 http[ 6] |\037\213\010
 c1   10.2 HTTP rx timeout (fd:7 1 ms)
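The mangled "Co57" header in the log is consistent with the body being compressed twice. A quick shell illustration of the symptom, independent of haproxy: decompressing a double-compressed stream once still yields a gzip stream rather than the original body.

```shell
# Compress twice, decompress once: the first two bytes of the result are
# still the gzip magic (1f 8b), not the original payload.
printf 'hello' | gzip -cn | gzip -cn | gunzip -c | od -An -tx1 | awk '{print $1, $2; exit}'
# prints: 1f 8b
```

So a client that performs one decompression, as curl and varnishtest do, sees gzip data where it expects the body.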


# Checks compression defined in defaults doesn't happen twice

varnishtest "Compression in defaults"
feature ignore_unknown_macro

server s1 {
rxreq
txresp -bodylen 100
} -start

haproxy h1 -conf {
  defaults
mode http
compression algo gzip

  frontend fe1
bind "fd@${fe_1}"
default_backend b1

  backend b1
server srv1 ${s1_addr}:${s1_port} 

} -start

client c1 -connect ${h1_fe_1_sock} {
txreq -url "/" -hdr "Accept-Encoding: gzip"
rxresp
expect resp.status == 200
expect resp.http.content-encoding == "gzip"
expect resp.http.transfer-encoding == "chunked"
gunzip
expect resp.bodylen == 100

} -run
server s1 -wait

Re: htx with compression issue, "Gunzip error: Body lacks gzip magics"

2019-01-02 Thread PiBa-NL

Hi Christopher, Willy,

Op 2-1-2019 om 15:37 schreef Christopher Faulet:

Le 29/12/2018 à 01:29, PiBa-NL a écrit :
compression with htx, and a slightly delayed body content it will 
prefix some rubbish and corrupt the gzip header..

Hi Pieter,

In fact, It is not a bug related to the compression. But a pure HTX 
one, about the defragmentation when we need space to store data. Here 
is a patch. It fixes the problem for me.
Okay, so the compression somehow 'triggers' this defragmentation to 
happen; are there simpler ways to make that happen 'on demand'?
Willy, if it is ok for you, I can merge it in upstream and backport it 
in 1.9.

--
Christopher Faulet
The patch fixes the reg-test for me as well; I guess it's good to go :). 
Thanks.


Regards,
PiBa-NL (Pieter)




htx with compression issue, "Gunzip error: Body lacks gzip magics"

2018-12-28 Thread PiBa-NL

Hi List,

When using compression with htx and a slightly delayed body content, it 
will prefix some rubbish and corrupt the gzip header.


Below is the output I get with the attached test. Removing http-use-htx 
'fixes' the test.


This happens with both 1.9.0 and today's commit a2dbeb2; not sure if this 
ever worked before.


 c1    0.1 len|1A\r
 c1    0.1 
chunk|\222\7\0\0\0\377\377\213\10\0\0\0\0\0\4\3JLJN\1\0\0\0\377\377

 c1    0.1 len|0\r
 c1    0.1 bodylen = 26
**   c1    0.1 === expect resp.status == 200
 c1    0.1 EXPECT resp.status (200) == "200" match
**   c1    0.1 === expect resp.http.content-encoding == "gzip"
 c1    0.1 EXPECT resp.http.content-encoding (gzip) == "gzip" match
**   c1    0.1 === gunzip
 c1    0.1 Gunzip error: Body lacks gzip magics
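varnishtest's gunzip step fails exactly when the first two body bytes are not the gzip magic. A tiny shell sketch of that check; the sample bytes \222\007 are taken from the corrupted chunk dump above:

```shell
# A gzip stream must start with the magic bytes 1f 8b; the corrupted body
# above starts with \222\007 instead, so a magic-byte check fails.
magic=$(printf '\222\007' | od -An -tx1 | awk '{print $1, $2; exit}')
if [ "$magic" != "1f 8b" ]; then
    echo "Gunzip error: Body lacks gzip magics"
fi
```

Note how the dump also contains `\213\10` later on: the real gzip header is there, just shifted behind the prefixed rubbish.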

Can someone take a look? Thanks in advance.

Regards,

PiBa-NL (Pieter)

# Checks htx with compression and a short delay between headers and data send 
by the server

varnishtest "Connection counters check"
feature ignore_unknown_macro

server s1 {
rxreq
txresp -nolen -hdr "Content-Length: 4"
delay 0.05
send "abcd"
} -start

haproxy h1 -conf {
  global
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
option http-use-htx

  frontend fe1
bind "fd@${fe1}"
compression algo gzip
#filter trace name BEFORE-HTTP-COMP
#filter compression
#filter trace name AFTER-HTTP-COMP
default_backend b1
  backend b1
server srv1 ${s1_addr}:${s1_port}

} -start

# configure port for lua to call fe4
client c1 -connect ${h1_fe1_sock} {
txreq -url "/" -hdr "Accept-Encoding: gzip"
rxresp
expect resp.status == 200
expect resp.http.content-encoding == "gzip"
gunzip
expect resp.body == "abcd"
} -run


Re: [PATCH] REGTEST: filters: add compression test

2018-12-23 Thread PiBa-NL

Added the LUA requirement to the test.

Op 23-12-2018 om 23:05 schreef PiBa-NL:

Hi Frederic,

As requested, hereby the regtest sent for inclusion into the git 
repository, without randomization and with your .diff applied. It also 
outputs the expected and actual checksum if the test fails, so it's clear 
that that is the issue detected.


Is it okay like this? Should the blob be bigger, as you mentioned 
needing a 10MB output to reproduce the original issue on your machine?


Regards,

PiBa-NL (Pieter)



From 29c3b9d344f360503bcd30f48558ca8a51df92ed Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:21:51 +0100
Subject: [PATCH] REGTEST: filters: add compression test

This test checks that data transferred with compression is correctly received 
at different download speeds
---
 reg-tests/filters/b5.lua | 19 
 reg-tests/filters/b5.vtc | 59 
 2 files changed, 78 insertions(+)
 create mode 100644 reg-tests/filters/b5.lua
 create mode 100644 reg-tests/filters/b5.vtc

diff --git a/reg-tests/filters/b5.lua b/reg-tests/filters/b5.lua
new file mode 100644
index ..6dbe1d33
--- /dev/null
+++ b/reg-tests/filters/b5.lua
@@ -0,0 +1,19 @@
+
+local data = "abcdefghijklmnopqrstuvwxyz"
+local responseblob = ""
+for i = 1,1 do
+  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
+end
+
+http01applet = function(applet) 
+  local response = responseblob
+  applet:set_status(200) 
+  applet:add_header("Content-Type", "application/javascript") 
+  applet:add_header("Content-Length", string.len(response)*10) 
+  applet:start_response() 
+  for i = 1,10 do
+applet:send(response) 
+  end
+end
+
+core.register_service("fileloader-http01", "http", http01applet)
diff --git a/reg-tests/filters/b5.vtc b/reg-tests/filters/b5.vtc
new file mode 100644
index ..2f4982cb
--- /dev/null
+++ b/reg-tests/filters/b5.vtc
@@ -0,0 +1,59 @@
+# Checks that compression doesnt cause corruption..
+
+varnishtest "Compression validation"
+#REQUIRE_VERSION=1.6
+#REQUIRE_OPTIONS=LUA
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+global
+#  log stdout format short daemon 
+   lua-load${testdir}/b5.lua
+
+defaults
+   modehttp
+   log global
+   option  httplog
+
+frontend main-https
+   bind"fd@${fe1}" ssl crt ${testdir}/common.pem
+   compression algo gzip
+   compression type text/html text/plain application/json 
application/javascript
+   compression offload
+   use_backend TestBack  if  TRUE
+
+backend TestBack
+   server  LocalSrv ${h1_fe2_addr}:${h1_fe2_port}
+
+listen fileloader
+   mode http
+   bind "fd@${fe2}"
+   http-request use-service lua.fileloader-http01
+} -start
+
+shell {
+HOST=${h1_fe1_addr}
+if [ "${h1_fe1_addr}" = "::1" ] ; then
+HOST="\[::1\]"
+fi
+
+md5=$(which md5 || which md5sum)
+
+if [ -z $md5 ] ; then
+echo "MD5 checksum utility not found"
+exit 1
+fi
+
+expectchecksum="4d9c62aa5370b8d5f84f17ec2e78f483"
+
+for opt in "" "--limit-rate 300K" "--limit-rate 500K" ; do
+checksum=$(curl --compressed -k "https://$HOST:${h1_fe1_port}" $opt | $md5 | cut -d ' ' -f1)
+if [ "$checksum" != "$expectchecksum" ] ; then 
+  echo "Expecting checksum $expectchecksum"
+  echo "Received checksum: $checksum"
+  exit 1; 
+fi
+done
+
+} -run
-- 
2.18.0.windows.1



Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2018-12-23 Thread PiBa-NL

Changed subject of patch requirement to 'REGTEST'.

Op 23-12-2018 om 21:17 schreef PiBa-NL:

Hi List,

Attached a new test to verify that the 'mailers' section is working 
properly.

Currently with 1.9 the mailers section sends thousands of mails for my setup...

As the test is rather slow I have marked it with a starting letter 's'.

Note that the test also fails on 1.6/1.7/1.8 but can be 'fixed' there 
by adding a 'timeout mail 200ms'.. (except on 1.6 which doesn't have 
that setting.)


I don't think that should be needed though if everything was working 
properly?


If the test could be committed, and related issues exposed fixed that 
would be neat ;)


Thanks in advance,

PiBa-NL (Pieter)



From 8d63f5a39a9b4b326b636e42ccafcf0c2173d752 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:06:31 +0100
Subject: [PATCH] REGTEST: mailers: add new test for 'mailers' section

This test verifies the mailers section works properly by checking that it sends 
the proper amount of mails when health-checks are changing and or marking a 
server up/down

The test currently fails on all versions of haproxy i tried with varying 
results.
1.9.0 produces thousands of mails..
1.8.14 only sends 1 mail, needs a 200ms 'timeout mail' to succeed
1.7.11 only sends 1 mail, needs a 200ms 'timeout mail' to succeed
1.6 only sends 1 mail, (does not have the 'timeout mail' setting implemented)
---
 reg-tests/mailers/shealthcheckmail.lua | 105 +
 reg-tests/mailers/shealthcheckmail.vtc |  75 ++
 2 files changed, 180 insertions(+)
 create mode 100644 reg-tests/mailers/shealthcheckmail.lua
 create mode 100644 reg-tests/mailers/shealthcheckmail.vtc

diff --git a/reg-tests/mailers/shealthcheckmail.lua 
b/reg-tests/mailers/shealthcheckmail.lua
new file mode 100644
index ..9c75877b
--- /dev/null
+++ b/reg-tests/mailers/shealthcheckmail.lua
@@ -0,0 +1,105 @@
+
+local vtc_port1 = 0
+local mailsreceived = 0
+local mailconnectionsmade = 0
+local healthcheckcounter = 0
+
+core.register_action("bug", { "http-res" }, function(txn)
+   data = txn:get_priv()
+   if not data then
+   data = 0
+   end
+   data = data + 1
+   print(string.format("set to %d", data))
+   txn.http:res_set_status(200 + data)
+   txn:set_priv(data)
+end)
+
+core.register_service("luahttpservice", "http", function(applet)
+   local response = "?"
+   local responsestatus = 200
+   if applet.path == "/setport" then
+   vtc_port1 = applet.headers["vtcport1"][0]
+   response = "OK"
+   end
+   if applet.path == "/svr_healthcheck" then
+   healthcheckcounter = healthcheckcounter + 1
+   if healthcheckcounter < 2 or healthcheckcounter > 6 then
+   responsestatus = 403
+   end
+   end
+
+   applet:set_status(responsestatus)
+   if applet.path == "/checkMailCounters" then
+   response = "MailCounters"
+   applet:add_header("mailsreceived", mailsreceived)
+   applet:add_header("mailconnectionsmade", mailconnectionsmade)
+   end
+   applet:start_response()
+   applet:send(response)
+end)
+
+core.register_service("fakeserv", "http", function(applet)
+   applet:set_status(200)
+   applet:start_response()
+end)
+
+function RecieveAndCheck(applet, expect)
+   data = applet:getline()
+   if data:sub(1,expect:len()) ~= expect then
+   core.Info("Expected: "..expect.." but got:"..data:sub(1,expect:len()))
+   applet:send("Expected: "..expect.." but got:"..data.."\r\n")
+   return false
+   end
+   return true
+end
+
+core.register_service("mailservice", "tcp", function(applet)
+   core.Info("# Mailservice Called #")
+   mailconnectionsmade = mailconnectionsmade + 1
+   applet:send("220 Welcome\r\n")
+   local data
+
+   if RecieveAndCheck(applet, "EHLO") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "MAIL FROM:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "RCPT TO:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "DATA") == false then
+   return
+   end
+   applet:send("354 OK\r\n")
+   core.Info(" Send your mailbody")
+   local endofmail = false
+ 

[PATCH] REGTEST: filters: add compression test

2018-12-23 Thread PiBa-NL

Hi Frederic,

As requested, hereby the regtest sent for inclusion into the git 
repository, without randomization and with your .diff applied. It also 
outputs the expected and actual checksum if the test fails, so it's clear 
that that is the issue detected.


Is it okay like this? Should the blob be bigger, as you mentioned 
needing a 10MB output to reproduce the original issue on your machine?


Regards,

PiBa-NL (Pieter)

From 64460dfeacef3d04af4243396007a606c2e5dbf7 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:21:51 +0100
Subject: [PATCH] REGTEST: filters: add compression test

This test checks that data transferred with compression is correctly received 
at different download speeds
---
 reg-tests/filters/b5.lua | 19 
 reg-tests/filters/b5.vtc | 58 
 2 files changed, 77 insertions(+)
 create mode 100644 reg-tests/filters/b5.lua
 create mode 100644 reg-tests/filters/b5.vtc

diff --git a/reg-tests/filters/b5.lua b/reg-tests/filters/b5.lua
new file mode 100644
index ..6dbe1d33
--- /dev/null
+++ b/reg-tests/filters/b5.lua
@@ -0,0 +1,19 @@
+
+local data = "abcdefghijklmnopqrstuvwxyz"
+local responseblob = ""
+for i = 1,1 do
+  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
+end
+
+http01applet = function(applet) 
+  local response = responseblob
+  applet:set_status(200) 
+  applet:add_header("Content-Type", "application/javascript") 
+  applet:add_header("Content-Length", string.len(response)*10) 
+  applet:start_response() 
+  for i = 1,10 do
+applet:send(response) 
+  end
+end
+
+core.register_service("fileloader-http01", "http", http01applet)
diff --git a/reg-tests/filters/b5.vtc b/reg-tests/filters/b5.vtc
new file mode 100644
index ..5216cdaf
--- /dev/null
+++ b/reg-tests/filters/b5.vtc
@@ -0,0 +1,58 @@
+# Checks that compression doesnt cause corruption..
+
+varnishtest "Compression validation"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+global
+#  log stdout format short daemon 
+   lua-load${testdir}/b5.lua
+
+defaults
+   modehttp
+   log global
+   option  httplog
+
+frontend main-https
+   bind"fd@${fe1}" ssl crt ${testdir}/common.pem
+   compression algo gzip
+   compression type text/html text/plain application/json 
application/javascript
+   compression offload
+   use_backend TestBack  if  TRUE
+
+backend TestBack
+   server  LocalSrv ${h1_fe2_addr}:${h1_fe2_port}
+
+listen fileloader
+   mode http
+   bind "fd@${fe2}"
+   http-request use-service lua.fileloader-http01
+} -start
+
+shell {
+HOST=${h1_fe1_addr}
+if [ "${h1_fe1_addr}" = "::1" ] ; then
+HOST="\[::1\]"
+fi
+
+md5=$(which md5 || which md5sum)
+
+if [ -z $md5 ] ; then
+echo "MD5 checksum utility not found"
+exit 1
+fi
+
+expectchecksum="4d9c62aa5370b8d5f84f17ec2e78f483"
+
+for opt in "" "--limit-rate 300K" "--limit-rate 500K" ; do
+checksum=$(curl --compressed -k "https://$HOST:${h1_fe1_port}" $opt | $md5 | cut -d ' ' -f1)
+if [ "$checksum" != "$expectchecksum" ] ; then 
+  echo "Expecting checksum $expectchecksum"
+  echo "Received checksum: $checksum"
+  exit 1; 
+fi
+done
+
+} -run
-- 
2.18.0.windows.1



[PATCH] REG-TEST: mailers: add new test for 'mailers' section

2018-12-23 Thread PiBa-NL

Hi List,

Attached a new test to verify that the 'mailers' section is working 
properly.

Currently with 1.9 the mailers section sends thousands of mails for my setup...

As the test is rather slow I have marked it with a starting letter 's'.

Note that the test also fails on 1.6/1.7/1.8 but can be 'fixed' there by 
adding a 'timeout mail 200ms'.. (except on 1.6 which doesn't have that 
setting.)


I don't think that should be needed though if everything was working 
properly?


If the test could be committed, and related issues exposed fixed that 
would be neat ;)


Thanks in advance,

PiBa-NL (Pieter)

From 49a605bfadaafe25de0f084c7d1d449eef9c23aa Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:06:31 +0100
Subject: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

This test verifies the mailers section works properly by checking that it sends 
the proper amount of mails when health-checks are changing and or marking a 
server up/down

The test currently fails on all versions of haproxy i tried with varying 
results.
1.9.0 produces thousands of mails..
1.8.14 only sends 1 mail, needs a 200ms 'timeout mail' to succeed
1.7.11 only sends 1 mail, needs a 200ms 'timeout mail' to succeed
1.6 only sends 1 mail, (does not have the 'timeout mail' setting implemented)
---
 reg-tests/mailers/shealthcheckmail.lua | 105 +
 reg-tests/mailers/shealthcheckmail.vtc |  75 ++
 2 files changed, 180 insertions(+)
 create mode 100644 reg-tests/mailers/shealthcheckmail.lua
 create mode 100644 reg-tests/mailers/shealthcheckmail.vtc

diff --git a/reg-tests/mailers/shealthcheckmail.lua 
b/reg-tests/mailers/shealthcheckmail.lua
new file mode 100644
index ..9c75877b
--- /dev/null
+++ b/reg-tests/mailers/shealthcheckmail.lua
@@ -0,0 +1,105 @@
+
+local vtc_port1 = 0
+local mailsreceived = 0
+local mailconnectionsmade = 0
+local healthcheckcounter = 0
+
+core.register_action("bug", { "http-res" }, function(txn)
+   data = txn:get_priv()
+   if not data then
+   data = 0
+   end
+   data = data + 1
+   print(string.format("set to %d", data))
+   txn.http:res_set_status(200 + data)
+   txn:set_priv(data)
+end)
+
+core.register_service("luahttpservice", "http", function(applet)
+   local response = "?"
+   local responsestatus = 200
+   if applet.path == "/setport" then
+   vtc_port1 = applet.headers["vtcport1"][0]
+   response = "OK"
+   end
+   if applet.path == "/svr_healthcheck" then
+   healthcheckcounter = healthcheckcounter + 1
+   if healthcheckcounter < 2 or healthcheckcounter > 6 then
+   responsestatus = 403
+   end
+   end
+
+   applet:set_status(responsestatus)
+   if applet.path == "/checkMailCounters" then
+   response = "MailCounters"
+   applet:add_header("mailsreceived", mailsreceived)
+   applet:add_header("mailconnectionsmade", mailconnectionsmade)
+   end
+   applet:start_response()
+   applet:send(response)
+end)
+
+core.register_service("fakeserv", "http", function(applet)
+   applet:set_status(200)
+   applet:start_response()
+end)
+
+function RecieveAndCheck(applet, expect)
+   data = applet:getline()
+   if data:sub(1,expect:len()) ~= expect then
+   core.Info("Expected: "..expect.." but got:"..data:sub(1,expect:len()))
+   applet:send("Expected: "..expect.." but got:"..data.."\r\n")
+   return false
+   end
+   return true
+end
+
+core.register_service("mailservice", "tcp", function(applet)
+   core.Info("# Mailservice Called #")
+   mailconnectionsmade = mailconnectionsmade + 1
+   applet:send("220 Welcome\r\n")
+   local data
+
+   if RecieveAndCheck(applet, "EHLO") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "MAIL FROM:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "RCPT TO:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "DATA") == false then
+   return
+   end
+   applet:send("354 OK\r\n")
+   core.Info(" Send your mailbody")
+   local endofmail = false
+   local subject = ""
+   while endofmail ~= true do
+   data = apple

d94f877 causes timeout in a basic connection test 1.9-dev11_d94f877

2018-12-17 Thread PiBa-NL

Hi List, Christopher,

Seems like d94f877 causes a timeout in a pretty 'basic' connection test 
that transfers a little bit of data?

Or at least attached test fails to complete for me..

#    top  TEST ./PB-TEST/basic_connection.vtc TIMED OUT (kill -9)
#    top  TEST ./PB-TEST/basic_connection.vtc FAILED (120.236) signal=9

Please can you take a look :) Thanks in advance.

Regards,

PiBa-NL (Pieter)

# Checks a simple request
varnishtest "Checks a simple request"
feature ignore_unknown_macro

server s1 {
rxreq
txresp -bodylen 42202
} -start

haproxy h1 -conf {
  global
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
log global
option httplog
timeout connect 3s
timeout client  4s
timeout server  4s

  frontend fe1
bind "fd@${fe_1}"
default_backend b1

  backend b1
http-reuse never 
server srv1 ${s1_addr}:${s1_port} 
#pool-max-conn 0

} -start

shell {
HOST=${h1_fe_1_addr}
if [ "${h1_fe_1_addr}" = "::1" ] ; then
HOST="\[::1\]"
fi
curl -v -k "http://$HOST:${h1_fe_1_port}/CurlTest1";
} -run

server s1 -wait

Re: corruption of data with compression in 1.9-dev10

2018-12-17 Thread PiBa-NL

Hi Christopher,

Fix confirmed.
 top   2.5 shell_out|File1 all OK
 top   2.5 shell_out|File2 all OK
 top   2.5 shell_out|File3 all OK
Thank you!

Regards,
PiBa-NL (Pieter)



Re: regtests - with option http-use-htx

2018-12-15 Thread PiBa-NL

Hi Willy,
Op 15-12-2018 om 17:06 schreef Willy Tarreau:

Hi Pieter,

On Sat, Dec 15, 2018 at 04:52:10PM +0100, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with added option: option http-use-htx

Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues sofar:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manor 1.1 > 2.0 expectation
for syslog. This surely needs a closer look?
#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

It looks exactly like another issue we've found when a content-length
is missing but the close is not seen, which is the same in your case
with the first proxy returning the 503 error page by default. Christopher
told me he understands what's happening in this situation (at least for
the one we've met), I'm CCing him in case this report fuels this thoughts.

Ok thanks.



 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but then the
'old' expectation seems wrong, and the bug is the case without htx?

 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 gpc0=0
gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 http_req_rate(1)=1
http_err_cnt=0 http_err_rate(1)=0
 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024,
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

Hmmm here I think we're really hitting corner cases depending on whether
the tracked counters are released before or after the logs are emitted.
In the case of htx, the logs are emitted slightly later than before,
which may induce this. Quite honestly I'd be inclined to set use=[01]
here in the regex to cover the race condition that exists in both cases,
as there isn't any single good value. Christopher, are you also OK with
this ? I can do the patch if you're OK.
It's not about emitting logs; it's querying the stats admin socket, and 
even with an added 'delay 2' before doing so, the results seem to show the 
same difference with/without htx. I don't think it's a matter of timing?
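Willy's proposed relaxation (accepting use=[01] in the regex) would indeed match both observed outputs. A quick sketch with grep -E against the two entry-line variants seen in the logs, trimmed to the relevant fields:

```shell
# Both the htx (use=1) and non-htx (use=0) stick-table entry lines should
# satisfy the relaxed expectation.
for line in \
    '0x8026612c0: key=127.0.0.1 use=1 exp=0' \
    '0x8026612c0: key=127.0.0.1 use=0 exp=0'
do
    printf '%s\n' "$line" | grep -Eq 'key=127\.0\.0\.1 use=[01] exp=[0-9]+' \
        && echo match
done
```

This papers over the difference rather than explaining it, which is why the question of whether use=1 is actually correct still matters.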


Regards,

PiBa-NL (Pieter)

**   c1    0.0 === expect resp.status == 503
 c1    0.0 EXPECT resp.status (503) == "503" match
***  c1    0.0 closing fd 7
**   c1    0.0 Ending
**   top   0.0 === delay 2
***  top   0.0 delaying 2 second(s)
**   top   2.1 === haproxy h1 -cli {
**   h1    2.1 CLI starting
**   h1    2.1 CLI waiting
***  h1    2.1 CLI connected fd 7 from ::1 26202 to ::1 26153
**   h1    2.1 === send "show table http1"
 h1    2.1 CLI send|show table http1
**   h1    2.1 === expect ~ "table: http1, type: ip, size:1024, 
used:(0|1\\n0x[...
***  h1    2.1 debug|0001:GLOBAL.accept(0005)=000b from [::1:26202] 
ALPN=

 h1    2.1 CLI connection normally closed
***  h1    2.1 CLI closing fd 7
***  h1    2.1 debug|0001:GLOBAL.srvcls[adfd:]
***  h1    2.1 debug|0001:GLOBAL.clicls[adfd:]
***  h1    2.1 debug|0001:GLOBAL.closed[adfd:]
 h1    2.1 CLI recv|# table: http1, type: ip, size:1024, used:1
 h1    2.1 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    2.1 CLI recv|
 h1    2.1 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"





regtests - with option http-use-htx

2018-12-15 Thread PiBa-NL

Hi List, Willy,

Trying to run some existing regtests with added option: option http-use-htx

Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the issues below so far:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, with a slightly modified syslog expectation 
(1.1 > 2.0). This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
The difference here is use=1 vs use=0; maybe that is better, but then 
the 'old' expectation seems wrong, and the bug is in the case without htx?
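The mismatch is easy to verify outside varnishtest: the expect regex insists on use=0, so a line reporting use=1 can never match it. A minimal sketch (the sample line is copied from the log above):

```shell
# A stick-table entry as printed over the CLI (use=1, taken from the log
# above), checked against the "use=0" requirement of the vtest expect.
line='0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 gpc0=0'
if printf '%s\n' "$line" | grep -Eq 'key=127\.0\.0\.1 use=0 exp=[0-9]*'; then
    echo "match"
else
    echo "no match"
fi
```

If use=1 is in fact the correct behaviour with htx, the fix would be in the test's regex (accepting use=(0|1)) rather than in haproxy.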


 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"


Regards,

PiBa-NL (Pieter)

#commit b406b87
# BUG/MEDIUM: connection: don't store recv() result into trash.data
#
# Cyril Bonté discovered that the proxy protocol randomly fails since
# commit 843b7cb ("MEDIUM: chunks: make the chunk struct's fields match
# the buffer struct"). This is because we used to store recv()'s return
# code into trash.data which is now unsigned, so it never compares as
# negative against 0. Let's clean this up and test the result itself
# without storing it first.

varnishtest "PROXY protocol random failures"

feature ignore_unknown_macro

syslog Slog_1 -repeat 8 -level info {
    recv
    expect ~ "Connect from .* to ${h1_ssl_addr}:${h1_ssl_port}"
    recv
    expect ~ "ssl-offload-http/http .* \"POST /[1-8] HTTP/2\\.0\""
} -start

haproxy h1 -conf {
    global
        nbproc 4
        nbthread 4
        tune.ssl.default-dh-param 2048
        stats bind-process 1
        log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err

    defaults
        mode http
        timeout client 1s
        timeout server 1s
        timeout connect 1s
        log global

        option http-use-htx

    listen http
        bind-process 1
        bind unix@${tmpdir}/http.socket accept-proxy name ssl-offload-http
        option forwardfor

    listen ssl-offload-http
        option httplog
        bind-process 2-4
        bind "fd@${ssl}" ssl crt ${testdir}/common.pem ssl no-sslv3 alpn h2,http/1.1
        server http unix@${tmpdir}/http.socket send-proxy
} -start


shell {
    HOST=${h1_ssl_addr}
    if [ "$HOST" = "::1" ] ; then
        HOST="\[::1\]"
    fi
    for i in 1 2 3 4 5 6 7 8 ; do
        urls="$urls https://$HOST:${h1_ssl_port}/$i"
    done
    curl -i -k -d 'x=x' $urls & wait $!
}

syslog Slog_1 -wait
# commit 3e60b11
# BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
#
# When using table_* converters ref_cnt was incremented
# and never decremented causing entries to not expire.
#
# The root cause appears to be that stktable_lookup_key()
# was called within all sample_conv_table_* functions which was
# incrementing ref_cnt and not decrementing after completion.
#
# Added stktable_release() to the end of each sample_conv_table_*
# function and reworked the end logic to ensure that ref_cnt is
# always decremented after use.
#
# This should be backported to 1.8

varnishtest "stick-tables: Test expirations when used with table_*"

# As some macros for haproxy are used in this file, this line is mandatory.
feature ignore_unknown_macro

# Do nothing.
server s1 {
} -start

haproxy h1 -conf {
    # Configuration file of 'h1' haproxy instance.
    defaults
        mode http
        timeout connect 5s
        timeout server  30s
        timeout client  30s
        option http-use-htx

    frontend http1
        bind "fd@${my_frontend_fd}"
        stick-table size 1k expire 1ms type ip store conn_rate(10s),http_req_cnt,http_err_cnt,http_req_rate(10s),http_err_rate(10s),gpc0,gpc0_rate(10s),gpt0
        http-request track-sc0 req.hdr(X-Forwarded-For)
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_req_cnt(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_trackers(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),in_table(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_bytes_in_rate(http1) -m int lt 0 }
        http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwa

Re: Quick update on 1.9

2018-12-15 Thread PiBa-NL

Hi Willy,

Op 15-12-2018 om 6:15 schreef Willy Tarreau:

- Compression corrupts data(Christopher is investigating):
https://www.mail-archive.com/haproxy@formilux.org/msg32059.html

This one was fixed, he had to leave quickly last evening so he
couldn't respond, but it was due to some of my changes to avoid
copies, I failed to grasp some corner cases of htx.

Could it be that it is not fixed/committed in the git repository? (By the way, 
I don't use htx in the vtc test file.) 
"6e0d8ae BUG/MINOR: mworker: don't use unitialized mworker_proc 
struct master" seems to be the latest commit, and the .vtc file still 
produces files with different hashes for the 3 curl commands for me.


Besides that, and as usual, thanks for your elaborate response on all the 
other subjects :).


Regards,

PiBa-NL (Pieter)



Re: Quick update on 1.9

2018-12-14 Thread PiBa-NL

Hi Willy,

Op 14-12-2018 om 22:32 schreef Willy Tarreau:

if we manage to get haproxy.org to work reasonably stable this week-
end, it will be a sign that we can release it.


There are still several known issues that should be addressed before a 
'release', imho.


- Compression corrupts data (Christopher is investigating): 
https://www.mail-archive.com/haproxy@formilux.org/msg32059.html
- Dispatch server crashes haproxy (I found it today): 
https://www.mail-archive.com/haproxy@formilux.org/msg32078.html
- stdout logging makes syslog logging fail (I mentioned it before, but 
thought I'd 'officially' re-report it now): 
https://www.mail-archive.com/haproxy@formilux.org/msg32079.html
- As you mention, the haproxy serving the haproxy.org website apparently 
crashed several times today when you tried a recent build. I think a 
week of running without a single crash would be a better indicator than 
a single weekend that a release could be imminent.
- Several of the '/checks/' regtests don't work. That might be a problem 
with varnishtest though, not sure, but you already discovered that.

And that's just the list of things I am aware of at the moment.

I'm usually not 'scared' to run a -dev version on my production box for 
a while and try a few new experimental features that seem useful to me 
over a weekend, but I do need to have the idea that it will 'work' as 
well as the version I update from, and to me it just doesn't seem there 
yet. (I would really like compression to be functional before I try 
again..)


So with several known bugs still to solve, imho it's not yet a good time 
to release it as a 'stable' version in a few days' time. Or did I 
misunderstand the 'sign' to release; is it one of several signs that 
need to be checked? I think a -dev11, or perhaps an -RC if someone 
likes that term, would probably be more appropriate before distros 
start including the new release expecting stability, while it actually 
brings a seemingly large potential of breaking some features that 
used to work. Even current new commits are still introducing new 
breakage, while shortly before a release I would expect mostly small 
fixes to get committed. That 'new' features aren't 100% stable 
might not be a blocker, but existing features that used to work 
properly should imho not get released in a broken state..


My 2 cents.

Regards,
PiBa-NL (Pieter)




stdout logging makes syslog logging fail.. 1.9-dev10-6e0d8ae

2018-12-14 Thread PiBa-NL

Hi List, Willy,

stdout logging makes syslog logging fail; a regtest that reproduces the 
issue is attached.


The attached test (a modification of /log/b0.vtc) fails just by adding 
a stdout logger: ***  h1    0.0 debug|[ALERT] 348/000831 (51048) : 
sendmsg()/writev() failed in logger #2: Socket operation on non-socket 
(errno=38). Apparently the stdout logger modifies the syslog behavior?


Tested with version 1.9-dev10-6e0d8ae, but I think it has never worked since 
stdout logging was introduced.


Regards,

PiBa-NL (Pieter)

# commit d02286d
# BUG/MINOR: log: pin the front connection when front ip/ports are logged
#
# Mathias Weiersmueller reported an interesting issue with logs which Lukas
# diagnosed as dating back from commit 9b061e332 (1.5-dev9). When front
# connection information (ip, port) are logged in TCP mode and the log is
# emitted at the end of the connection (eg: because %B or any log tag
# requiring LW_BYTES is set), the log is emitted after the connection is
# closed, so the address and ports cannot be retrieved anymore.
#
# It could be argued that we'd make a special case of these to immediately
# retrieve the source and destination addresses from the connection, but it
# seems cleaner to simply pin the front connection, marking it "tracked" by
# adding the LW_XPRT flag to mention that we'll need some of these elements
# at the last moment. Only LW_FRTIP and LW_CLIP are affected. Note that after
# this change, LW_FRTIP could simply be removed as it's not used anywhere.
#
# Note that the problem doesn't happen when using %[src] or %[dst] since
# all sample expressions set LW_XPRT.

varnishtest "Wrong ip/port logging"
feature ignore_unknown_macro

server s1 {
    rxreq
    txresp
} -start

syslog Slg_1 -level notice {
    recv
    recv
    recv info
    expect ~ \"dip\":\"${h1_fe_1_addr}\",\"dport\":\"${h1_fe_1_port}.*\"ts\":\"[cC]D\",\"
} -start

haproxy h1 -conf {
    global
        log stdout format short daemon
        log ${Slg_1_addr}:${Slg_1_port} local0

    defaults
        log global
        timeout connect 3000
        timeout client 1
        timeout server  1

    frontend fe1
        bind "fd@${fe_1}"
        mode tcp
        log-format {\"dip\":\"%fi\",\"dport\":\"%fp\",\"c_ip\":\"%ci\",\"c_port\":\"%cp\",\"fe_name\":\"%ft\",\"be_name\":\"%b\",\"s_name\":\"%s\",\"ts\":\"%ts\",\"bytes_read\":\"%B\"}
        default_backend be_app

    backend be_app
        server app1 ${s1_addr}:${s1_port}
} -start

client c1 -connect ${h1_fe_1_sock} {
    txreq -url "/"
    delay 0.02
} -run

syslog Slg_1 -wait



crash with regtest: /reg-tests/connection/h00001.vtc after commit f157384

2018-12-14 Thread PiBa-NL

Hi List, Willy,

Current 1.9-dev master (6e0d8ae) crashes with regtest 
/reg-tests/connection/h00001.vtc; stack trace below. It fails after commit f157384.


Can someone check? Thanks.

Regards,

PiBa-NL (Pieter)

Program terminated with signal 11, Segmentation fault.
#0  0x0057f34f in connect_server (s=0x802616500) at 
src/backend.c:1384

1384 HA_ATOMIC_ADD(&srv->counters.connect, 1);
(gdb) bt full
#0  0x0057f34f in connect_server (s=0x802616500) at 
src/backend.c:1384

    cli_conn = (struct connection *) 0x8026888c0
    srv_conn = (struct connection *) 0x802688a80
    old_conn = (struct connection *) 0x0
    srv_cs = (struct conn_stream *) 0x8027b8180
    srv = (struct server *) 0x0
    reuse = 0
    reuse_orphan = 0
    err = 0
    i = 5
#1  0x004a8acc in sess_update_stream_int (s=0x802616500) at 
src/stream.c:928

    conn_err = 8
    srv = (struct server *) 0x0
    si = (struct stream_interface *) 0x802616848
    req = (struct channel *) 0x802616510
#2  0x004a37c2 in process_stream (t=0x80265c320, 
context=0x802616500, state=257) at src/stream.c:2302

    srv = (struct server *) 0x0
    s = (struct stream *) 0x802616500
    sess = (struct session *) 0x8027be000
    rqf_last = 9469954
    rpf_last = 2147483648
    rq_prod_last = 7
    rq_cons_last = 0
    rp_cons_last = 7
    rp_prod_last = 0
    req_ana_back = 0
    req = (struct channel *) 0x802616510
    res = (struct channel *) 0x802616570
    si_f = (struct stream_interface *) 0x802616808
    si_b = (struct stream_interface *) 0x802616848
#3  0x005e9da7 in process_runnable_tasks () at src/task.c:432
    t = (struct task *) 0x80265c320
    state = 257
    ctx = (void *) 0x802616500
    process = (struct task *(*)(struct task *, void *, unsigned 
short)) 0x4a0480 

    t = (struct task *) 0x80265c320
    max_processed = 200
#4  0x00511592 in run_poll_loop () at src/haproxy.c:2620
    next = 0
    exp = 0
#5  0x0050dc00 in run_thread_poll_loop (data=0x802637080) at 
src/haproxy.c:2685

---Type  to continue, or q  to quit---
    start_lock = {lock = 0, info = {owner = 0, waiters = 0, 
last_location = {function = 0x0, file = 0x0, line = 0}}}

    ptif = (struct per_thread_init_fct *) 0x92ee30
    ptdf = (struct per_thread_deinit_fct *) 0x0
#6  0x0050a2b6 in main (argc=4, argv=0x7fffea48) at 
src/haproxy.c:3314

    tids = (unsigned int *) 0x802637080
    threads = (pthread_t *) 0x802637088
    i = 1
    old_sig = {__bits = 0x7fffe770}
    blocked_sig = {__bits = 0x7fffe780}
    err = 0
    retry = 200
    limit = {rlim_cur = 4042, rlim_max = 4042}
    errmsg = 0x7fffe950 ""
    pidfd = -1
Current language:  auto; currently minimal


haproxy -vv
HA-Proxy version 1.9-dev10-6e0d8ae 2018/12/14
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -DDEBUG_THREAD -DDEBUG_MEMORY -pipe -g -fstack-protector 
-fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow -Wno-address-of-packed-member 
-Wno-null-dereference -Wno-unused-label -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_ACCEPT4=1 USE_REGPARM=1 
USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE
  h2 : mode=HTX    side=FE|BE
    : mode=HTX    side=FE|BE
    : mode=TCP|HTTP   side=FE|BE

Available filters :
    [SPOE] spoe
    [COMP] compression
    [CACHE] cache
    [TRACE] trace


Re: corruption of data with compression in 1.9-dev10

2018-12-12 Thread PiBa-NL

Hi Christopher,

Op 12-12-2018 om 13:49 schreef Christopher Faulet:

Le 12/12/2018 à 12:07, Pi Ba a écrit :
Found the issue on the 10th (I think commit 56b0348).. so yesterday's 
commit isn't the (only) problem.. tested with commit 0007d0a the 
issue also happens. Reverting only below mentioned commit I can't 
easily do atm. I'll check more closely this evening.




Hum, I don't understand, the commit 56b0348 fixes a bug in the H2 
multiplexer. you don't use it in your test-case. It should be totally 
unrelated.


It wasn't that specific commit that broke my test; it's just that I 
picked that one because I could... That was simply the day I started 
to try 1.9-X on my production environment.


Having spent a bit more time checking/compiling various commits, I found 
that this is the commit that 'broke' the attached testcase 
(some of its contents are misplaced/repeated):

http://git.haproxy.org/?p=haproxy.git;a=commit;h=d247be0620c35ea0a43074fd88c6a520629c1823

P.s. I also sent you off-list the full output of the test with the added 
configuration options: filter trace name BEFORE / filter compression / 
filter trace name AFTER (resulting in a 90MB log).


Regards,

PiBa-NL (Pieter)




corruption of data with compression in 1.9-dev10

2018-12-11 Thread PiBa-NL

Hi List,

Didn't have time yet to bisect when it went wrong, but the attached testfile 
produces the following output after 3 curl requests at different speeds. 
This seems to trigger a problem: the hash of the downloaded content is 
no longer what it should be (in my actual environment it's a 2MB 
javascript file that comes from an IIS server behind haproxy). It 
already took a few hours more than desired to come up with a seemingly 
reliable reproduction.
1.9-dev10 is the first one I put on my production environment, as I think 
release is imminent so it 'should' be pretty stable ;) (yes, I know, I 
shouldn't assume..). Before, it was running 1.8.14, so I was quick to revert 
to that 1.8 again :).


Using these settings:

    compression algo gzip
    compression type text/html text/plain application/json 
application/javascript

    compression offload
When these compression settings are disabled, it completes successfully..
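The detection approach the test uses, hashing the downloaded file and comparing against a known-good value, can be sketched independently of haproxy: a gzip round-trip must preserve content bit for bit, so any hash change means corruption. A minimal sketch using GNU md5sum (the attached test uses FreeBSD's `md5 -q` instead); paths are illustrative:

```shell
# Round-trip a payload through gzip and compare hashes; a mismatch would
# indicate corruption, the same md5-comparison idea as the attached test.
tmp=$(mktemp -d)
seq 1 10000 > "$tmp/payload"
before=$(md5sum < "$tmp/payload")
gzip -c "$tmp/payload" > "$tmp/payload.gz"
gunzip -c "$tmp/payload.gz" > "$tmp/payload.out"
after=$(md5sum < "$tmp/payload.out")
if [ "$before" = "$after" ]; then
    echo "payload OK"
else
    echo "payload corrupted"
fi
rm -r "$tmp"
```

In the failing case above the same comparison yields two different hashes for File2 and File3, which is what flags the corruption.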

 top   2.4 shell_cmd|  exit 1
 top   2.4 shell_cmd|    fi
 top   2.5 shell_out|File1 all OK
 top   2.5 shell_out|File2 not OK 7798551c02a37ce89c77fc18fc415e5b
 top   2.5 shell_out|File3 not OK 3146c4c9fce4da750558bfd9387ffc3b
 top   2.5 shell_status = 0x0001
 top   2.5 shell_exit not as expected: got 0x0001 wanted 0x
*    top   2.5 RESETTING after ./PB-TEST/ulticompres/b5.vtc
**   h1    2.5 Reset and free h1 haproxy 51853
**   h1    2.5 Wait
**   h1    2.5 Stop HAproxy pid=51853
 h1    2.5 STDOUT poll 0x11
 h1    2.5 Kill(2)=0: No error: 0
**   h1    2.6 WAIT4 pid=51853 status=0x0002 (user 0.253496 sys 0.00)
*    top   2.6 TEST ./PB-TEST/ulticompres/b5.vtc FAILED
#    top  TEST ./PB-TEST/ulticompres/b5.vtc FAILED (2.581) exit=2

haproxy -v
HA-Proxy version 1.9-dev10-3815b22 2018/12/11
Copyright 2000-2018 Willy Tarreau 

Can anyone confirm? Or perhaps even fix ;). I'll try and dig a little more 
tomorrow evening :).


Thanks in advance,
PiBa-NL (Pieter)


local data = "abcdefghijklmnopqrstuvwxyz"
local responseblob = ""
math.randomseed(1)
for i = 1,1 do
responseblob = responseblob .. "\r\n" .. i .. data:sub(1, 
math.floor(math.random(4,26)))
end

http01applet = function(applet) 
  local response = responseblob
  applet:set_status(200) 
  applet:add_header("Content-Type", "application/javascript") 
  applet:add_header("Content-Length", string.len(response)*10) 
  applet:start_response() 
  for i = 1,10 do
applet:send(response) 
  end
end 

core.register_service("fileloader-http01", "http", http01applet)
# Checks that compression doesnt cause corruption..

varnishtest "Compression validation"
feature ignore_unknown_macro

haproxy h1 -conf {
    global
        # log stdout format short daemon
        lua-load ${testdir}/b5.lua

    defaults
        mode http
        log global
        option httplog

    frontend main-https
        bind "fd@${fe1}" ssl crt ${testdir}/common.pem
        compression algo gzip
        compression type text/html text/plain application/json application/javascript
        compression offload
        use_backend TestBack if TRUE

    backend TestBack
        server LocalSrv ${h1_fe2_addr}:${h1_fe2_port}

    listen fileloader
        mode http
        bind "fd@${fe2}"
        http-request use-service lua.fileloader-http01
} -start

shell {
    HOST=${h1_fe1_addr}
    if [ "${h1_fe1_addr}" = "::1" ] ; then
        HOST="\[::1\]"
    fi
    curl --compressed -k "https://$HOST:${h1_fe1_port}" -o ${tmpdir}/outputfile1.bin
    curl --compressed -k "https://$HOST:${h1_fe1_port}" -o ${tmpdir}/outputfile3.bin --limit-rate 300K
    curl --compressed -k "https://$HOST:${h1_fe1_port}" -o ${tmpdir}/outputfile2.bin --limit-rate 500K
} -run

shell {
md5sum=$(md5 -q ${tmpdir}/outputfile1.bin)
if [ "$md5sum" =  "f0d51d274ebc7696237efec272a38c41" ]
then
  echo "File1 all OK"
else
  echo "File1 not OK $md5sum "
  testfailed=1
fi

md5sum=$(md5 -q ${tmpdir}/outputfile2.bin)
if [ "$md5sum" =  "f0d51d274ebc7696237efec272a38c41" ]
then
  echo "File2 all OK"
else
  echo "File2 not OK $md5sum "
  testfailed=1
fi

md5sum=$(md5 -q ${tmpdir}/outputfile3.bin)
if [ "$md5sum" =  "f0d51d274ebc7696237efec272a38c41" ]
then
  echo "File3 all OK"
else
  echo "File3 not OK $md5sum "
  testfailed=1
fi

if [ -n "$testfailed" ]; then
  exit 1
fi
} -run

Re: regtest failure for /log/b00000.vtc, tcp health-check makes and closes a connection to s1 server without valid http-request

2018-12-08 Thread PiBa-NL

Hi Willy,

Op 8-12-2018 om 23:49 schreef Willy Tarreau:

Hi Pieter,

Just let me know which patch you prefer me to apply, I'm fine with
your options.
The patch I prefer would be 'c', to remove the 'check' from the server 
line, as it simply removes all possibly check-related 'issues'.
Willy

Regards,

PiBa-NL (Pieter)




regtest failure for /log/b00000.vtc, tcp health-check makes and closes a connection to s1 server without valid http-request

2018-12-08 Thread PiBa-NL

Hi List, Willy,

The regtest /reg-tests/log/b00000.vtc is failing for me as shown 
below, and attached:


***  s1    0.0 accepted fd 5 127.0.0.1 29538
**   s1    0.0 === rxreq
 s1    0.0 HTTP rx failed (fd:5 read: Connection reset by peer)
***  c1    0.0 closing fd 8
**   c1    0.0 Ending
*    top   0.0 RESETTING after ./reg-tests/log/b0.vtc

This happens because the health check makes a TCP connection and then 
disconnects, while the s1 server expects an HTTP request.

So to fix this, I propose to apply one of 3 possible fixes I could 
imagine; each one fixes the test when executed:


a- use s2 server specifically for the tcp health-check
b- use a option httpchk, and repeat s1 server twice
c- remove the health-check completely.

I think option C is probably the cleanest and most fail-safe way, and I'm 
'pretty sure' that the health check isn't actually needed to reproduce 
the original issue. Anyhow, health checks could be a source of random 
test failures: when the system is really slow, there might be 2 checks 
during a test, and a normal varnishtest server only processes 1 
connection unless specified differently, e.g. using a 's0 -dispatch'.


Or on second (fourth? / last) thought, is there a bug somewhere, as the 
TCP health check 'should' abort the connection even before the 
TCP 3-way handshake is completed, and as such s1 should not see that 
first connection? (Is that also possible/valid for a FreeBSD system, or 
would that be a Linux trick?)


Regards,

PiBa-NL (Pieter)

 top   0.0 extmacro def 
pwd=/usr/ports/net/haproxy-devel/work/haproxy-eb2bbba
 top   0.0 extmacro def localhost=127.0.0.1
 top   0.0 extmacro def bad_backend=127.0.0.1 14768
 top   0.0 extmacro def bad_ip=192.0.2.255
 top   0.0 macro def 
testdir=/usr/ports/net/haproxy-devel/work/haproxy-eb2bbba/./reg-tests/log
 top   0.0 macro def 
tmpdir=/tmp/2018-12-08_21-44-28.0VTEYU/vtc.97431.55aa9ba1
*top   0.0 TEST ./reg-tests/log/b0.vtc starting
**   top   0.0 === varnishtest "Wrong ip/port logging"
*top   0.0 TEST Wrong ip/port logging
**   top   0.0 === feature ignore_unknown_macro
**   top   0.0 === server s1 {
**   s10.0 Starting server
 s10.0 macro def s1_addr=127.0.0.1
 s10.0 macro def s1_port=45615
 s10.0 macro def s1_sock=127.0.0.1 45615
*s10.0 Listen on 127.0.0.1 45615
**   top   0.0 === syslog Slg_1 -level notice {
**   Slg_1  0.0 Starting syslog server
 Slg_1  0.0 macro def Slg_1_addr=127.0.0.1
 Slg_1  0.0 macro def Slg_1_port=16363
 Slg_1  0.0 macro def Slg_1_sock=127.0.0.1 16363
*Slg_1  0.0 Bound on 127.0.0.1 16363
**   s10.0 Started on 127.0.0.1 45615
**   top   0.0 === haproxy h1 -conf {
**   Slg_1  0.0 Started on 127.0.0.1 16363 (level: 5)
**   Slg_1  0.0 === recv
 h10.0 macro def h1_cli_sock=::1 58187
 h10.0 macro def h1_cli_addr=::1
 h10.0 macro def h1_cli_port=58187
 h10.0 setenv(cli, 6)
 h10.0 macro def h1_fe_1_sock=::1 33423
 h10.0 macro def h1_fe_1_addr=::1
 h10.0 macro def h1_fe_1_port=33423
 h10.0 setenv(fe_1, 7)
 h10.0 conf|global
 h10.0 conf|\tstats socket 
/tmp/2018-12-08_21-44-28.0VTEYU/vtc.97431.55aa9ba1/h1/stats.sock level admin 
mode 600
 h10.0 conf|stats socket "fd@${cli}" level admin
 h10.0 conf|
 h10.0 conf|global
 h10.0 conf|log 127.0.0.1:16363 local0
 h10.0 conf|
 h10.0 conf|defaults
 h10.0 conf|log global
 h10.0 conf|timeout connect 3000
 h10.0 conf|timeout client 1
 h10.0 conf|timeout server  1
 h10.0 conf|
 h10.0 conf|frontend fe1
 h10.0 conf|bind "fd@${fe_1}"
 h10.0 conf|mode tcp
 h10.0 conf|log-format 
{\"dip\":\"%fi\",\"dport\":\"%fp\",\"c_ip\":\"%ci\",\"c_port\":\"%cp\",\"fe_name\":\"%ft\",\"be_name\":\"%b\",\"s_name\":\"%s\",\"ts\":\"%ts\",\"bytes_read\":\"%B\"}
 h10.0 conf|default_backend be_app
 h10.0 conf|
 h10.0 conf|backend be_app
 h10.0 conf|server app1 127.0.0.1:45615 check
**   h10.0 haproxy_start
 h10.0 opt_worker 0 opt_daemon 0 opt_check_mode 0
 h10.0 argv|exec 
/usr/ports/net/haproxy-devel/work/haproxy-eb2bbba/haproxy -d  -f 
/tmp/2018-12-08_21-44-28.0VTEYU/vtc.97431.55aa9ba1/h1/cfg 
 h10.0 XXX 9 @586
***  h10.0 PID: 97435
 h10.0 macro def h1_pid=97435
 h10.0 macro def 
h1_name=/tmp/2018-12-08_21-44-28.0VTEYU/vtc.97431.55aa9ba1/h1
**   top   0.0 === client c1 -connect ${h1_fe_1_sock} {
**   c10.0 Starting client
**   c10.0 Waiting for client
***  c10.0 Connect to ::1 33423
***  c10.0 connected f

[PATCH] REGTEST/MINOR: skip seamless-reload test with abns socket on freebsd

2018-12-08 Thread PiBa-NL

Hi List, Willy,

Added the lines below to reg-tests/seamless-reload/b0.vtc. I'm sure there 
are other targets to be excluded, but I'm not sure which. Or should I 
have listed 'all' targets except the 5 linux22 - linux2628 specific 
versions to be excluded? The only thing I know for sure is that it doesn't 
currently work on FreeBSD. I can re-spin the patch with all non-linux 
targets listed if desired.


# expose-fd is available starting at version 1.8
#REQUIRE_VERSION=1.8
# abns@ sockets are not available on freebsd
#EXCLUDE_TARGETS=freebsd

Regards,
PiBa-NL (Pieter)

From 3ef9f57b274b350b69d747f8f92fedfb8d283092 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sat, 8 Dec 2018 20:51:16 +0100
Subject: [PATCH] REGTEST/MINOR: skip seamless-reload test with abns socket on
 freebsd

abns sockets are not available on freebsd as such mark the test to skip
this OS and expose-fd was implemented first in 1.8 so require that
---
 reg-tests/seamless-reload/b0.vtc | 5 +
 1 file changed, 5 insertions(+)

diff --git a/reg-tests/seamless-reload/b0.vtc 
b/reg-tests/seamless-reload/b0.vtc
index 498e0c61..8f7acf64 100644
--- a/reg-tests/seamless-reload/b0.vtc
+++ b/reg-tests/seamless-reload/b0.vtc
@@ -11,6 +11,11 @@
 varnishtest "Seamless reload issue with abns sockets"
 feature ignore_unknown_macro
 
+# expose-fd is available starting at version 1.8
+#REQUIRE_VERSION=1.8
+# abns@ sockets are not available on freebsd
+#EXCLUDE_TARGETS=freebsd
+
 haproxy h1 -W -conf {
   global
 stats socket ${tmpdir}/h1/stats level admin expose-fd listeners
-- 
2.18.0.windows.1



[PATCH] REGTEST/MINOR: remove double body specification for server txresp

2018-12-08 Thread PiBa-NL

Hi List,

I'm getting a regtest failure for the h1.vtc test, seemingly because 
varnishtest doesn't like the definition.


Error (full log attached):
 s1    0.0 http[23] |be-hdr-crc: 3634102538
 s1    0.0 bodylen = 0
**   s1    0.0 === txresp \
 s1    0.0 Assert error in http_tx_parse_args(), vtc_http.c line 
870:  Condition(body == nullbody) not true.


***  h1    1.0 debug|:be.srvcls[000b:adfd]
***  h1    1.0 debug|:be.clicls[000b:adfd]

This comes from the following definition:

    txresp \
      -status 234 \
      -hdr "hdr1: val1" \
      -hdr "hdr2:  val2a" \
      -hdr "hdr2:    val2b" \
      -hdr "hdr3:  val3a, val3b" \
      -hdr "hdr4:" \
      -bodylen 14 \
      -body "This is a body"

Both -bodylen and -body are defined here, while it seems to me these 2 
settings conflict: '-bodylen' generates somewhat random body content, 
while '-body' defines the exact string to send as the body..
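The conflict can be illustrated in plain shell with hypothetical stand-ins for the two body sources: a synthesized body of a given length and a literal body can have the same length yet different bytes, so only one of them can be what actually goes on the wire.

```shell
# Hypothetical stand-ins for the two txresp body sources: "-bodylen N"
# synthesizes N bytes, while "-body S" sends the literal string S.
gen_bodylen() { head -c "$1" /dev/zero | tr '\0' 'x'; }
body_literal='This is a body'
body_synth=$(gen_bodylen 14)
# Same length, different content: specifying both is ambiguous, which is
# why vtc_http.c asserts that only one body definition is present.
[ "${#body_literal}" -eq "${#body_synth}" ] && echo "lengths equal"
[ "$body_literal" != "$body_synth" ]      && echo "contents differ"
```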


It seems to me that the bodylen should be removed? A patch that does that 
is attached.


Regards,

PiBa-NL

From e786be564e7dca1e3b347b6cc9e0af05c85e975b Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sat, 8 Dec 2018 19:48:37 +0100
Subject: [PATCH] REGTEST/MINOR: remove double body specification for server
 txresp

fix http-rules/h0.vtc / http-rules/h0.vtc as both 'bodylen' and
'body' are specified, these settings conflict with each other as they
both generate/present the body to send.
---
 reg-tests/http-rules/h0.vtc | 1 -
 reg-tests/http-rules/h1.vtc | 1 -
 2 files changed, 2 deletions(-)

diff --git a/reg-tests/http-rules/h0.vtc b/reg-tests/http-rules/h0.vtc
index 25388f8a..aedb41ff 100644
--- a/reg-tests/http-rules/h0.vtc
+++ b/reg-tests/http-rules/h0.vtc
@@ -25,7 +25,6 @@ server s1 {
  -hdr "hdr2:val2b" \
  -hdr "hdr3:  val3a, val3b" \
  -hdr "hdr4:" \
- -bodylen 14 \
  -body "This is a body"
 
expect req.method == "GET"
diff --git a/reg-tests/http-rules/h1.vtc b/reg-tests/http-rules/h1.vtc
index 80522a1b..ca86f1b9 100644
--- a/reg-tests/http-rules/h1.vtc
+++ b/reg-tests/http-rules/h1.vtc
@@ -24,7 +24,6 @@ server s1 {
  -hdr "hdr2:val2b" \
  -hdr "hdr3:  val3a, val3b" \
  -hdr "hdr4:" \
- -bodylen 14 \
  -body "This is a body"
 
expect req.method == "GET"
-- 
2.18.0.windows.1

 top   0.0 extmacro def 
pwd=/usr/ports/net/haproxy-devel/work/haproxy-eb2bbba
 top   0.0 extmacro def localhost=127.0.0.1
 top   0.0 extmacro def bad_backend=127.0.0.1 28101
 top   0.0 extmacro def bad_ip=192.0.2.255
 top   0.0 macro def 
testdir=/usr/ports/net/haproxy-devel/work/haproxy-eb2bbba/./reg-tests/http-rules
 top   0.0 macro def 
tmpdir=/tmp/2018-12-08_19-21-06.FYVFcT/vtc.86314.04fe4024
*top   0.0 TEST ./reg-tests/http-rules/h1.vtc starting
**   top   0.0 === varnishtest "Composite HTTP manipulation test (H1 and H2 
cle...
*top   0.0 TEST Composite HTTP manipulation test (H1 and H2 clear to H1 
clear)
**   top   0.0 === feature ignore_unknown_macro
**   top   0.0 === server s1 {
**   s10.0 Starting server
 s10.0 macro def s1_addr=127.0.0.1
 s10.0 macro def s1_port=23305
 s10.0 macro def s1_sock=127.0.0.1 23305
*s10.0 Listen on 127.0.0.1 23305
**   top   0.0 === haproxy h1 -conf {
**   s10.0 Started on 127.0.0.1 23305
***  s10.0 Iteration 0
 h10.0 macro def h1_cli_sock=::1 23306
 h10.0 macro def h1_cli_addr=::1
 h10.0 macro def h1_cli_port=23306
 h10.0 setenv(cli, 5)
 h10.0 macro def h1_feh1_sock=::1 23307
 h10.0 macro def h1_feh1_addr=::1
 h10.0 macro def h1_feh1_port=23307
 h10.0 setenv(feh1, 6)
 h10.0 macro def h1_feh2_sock=::1 23308
 h10.0 macro def h1_feh2_addr=::1
 h10.0 macro def h1_feh2_port=23308
 h10.0 setenv(feh2, 7)
 h10.0 conf|global
 h10.0 conf|\tstats socket 
/tmp/2018-12-08_19-21-06.FYVFcT/vtc.86314.04fe4024/h1/stats.sock level admin 
mode 600
 h10.0 conf|stats socket "fd@${cli}" level admin
 h10.0 conf|
 h10.0 conf|defaults
 h10.0 conf|\tmode http
 h10.0 conf|\ttimeout connect 1s
 h10.0 conf|\ttimeout client  1s
 h10.0 conf|\ttimeout server  1s
 h10.0 conf|
 h10.0 conf|frontend fe
 h10.0 conf|\tbind "fd@${feh1}"
 h10.0 conf|\tbind "fd@${feh2}" proto h2
 h10.0 conf|
 h10.0 conf|\t requests
 h10.0 conf|\thttp-request set-var(req.method) method
 h10.0 conf|\thttp-request set-var(req.uri)url
 h10.0 conf|\thtt

Re: behavior change when enabling HTX (in regtest /connection/b00000.vtc)

2018-12-05 Thread PiBa-NL

Hi Willy,

Op 3-12-2018 om 4:29 schreef Willy Tarreau:

Hi Pieter,

On Mon, Dec 03, 2018 at 12:30:37AM +0100, PiBa-NL wrote:

Hi List,

When running regtest /connection/b0.vtc with the added setting below:

defaults

     option http-use-htx

The test no longer completes successfully, is this 'by design' ? It seems
its not closing the server connection? Or at least not logging so, which i
guess is why the log line isn't emitted at the same 'expected' time?

It's not expected that the connection is not closed,
After some additional testing, I think the connection is and was properly 
closed; just the logging was a bit strange.

however since the log
happens at slightly different stages it could be possible that it's just a
matter of event logging. In any case we'll have to have a look.
A weird additional issue was that adding 'option httplog' changed the 
number of syslog lines produced: instead of 2 connect lines, it shows 1 
http log line. That is with the 'old' -dev9 version; the current master 
branch seems to have fixed this already.


By the way we've been thinking how to easily run the same tests with and
without HTX. I wanted to support "ifdef" in the config but it's a bit late
now. For the time being, I found that we could do something very ugly like
having something like this in the config :

  ${HTX} http-use-htx

and set the "HTX" variable to either "option" or "desc" (latter being to
ignore the keyword by making it the proxy's description). With minor
changes to the config parser (dropping leading empty words), we could
have "$NOHTX" option http-use-htx and set "NOHTX" to "no" to disable
it.


I'm not sure how feasible this is in the end. I like the ability to also
just run a test directly, outside of the run-regtests.sh script, and get
the same pass/fail result. When using such a $NOHTX 'trick', the test would
need to always be run twice with different parameters to perform a
'complete' test of all haproxy features in the current build. Also, what
would the minimum required haproxy version for such a test become: would it
be 1.9, or 1.8, which wouldn't know what 'no option http-use-htx' means?
And even then, the specific /connection/b0.vtc test currently succeeds with
htx only if the HTTP/1.1 is also changed to HTTP/2.0 in the expected syslog
output, which I'm not sure is or isn't a desired effect/change.


When testing with 1.6, the /connection/b0.vtc test needs to be run without
threads. And with 1.5 the config would need to change again to not include
fd@ sockets, which seems impossible though, as varnishtest automatically
adds an 'admin socket' into the config utilizing such an fd@ socket.


In the end it probably becomes tricky to use one 'master set' of tests
that can be run against any older version without duplicating tests or
including some more advanced ifdef constructions, as you already mentioned.
I think that would need to be supported by the 'testing framework', not by
haproxy itself. Maybe 'vtest' will eventually have such abilities? Or
perhaps the vtc test file would need a little pre-processing before passing
it to varnishtest, though that doesn't really make things easier/faster
either.
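
As a quick illustration of the substitution idea discussed above (a sketch only; the `option`/`desc` semantics are as Willy describes them, and the expansion is simulated here with sed rather than haproxy's own environment-variable handling):

```shell
# Simulate how a "${HTX}" placeholder in a config template would expand
# under the two proposed settings (assumption: plain textual substitution).
tpl='${HTX} http-use-htx'
for v in option desc; do
  printf '%s\n' "$tpl" | sed "s/\${HTX}/$v/"
done
```

With `HTX=option` the keyword is enabled; with `HTX=desc` the same line degrades into a harmless proxy description, effectively disabling it.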



Cheers,
Willy


Regards,

PiBa-NL (Pieter)




behavior change when enabling HTX (in regtest /connection/b00000.vtc)

2018-12-02 Thread PiBa-NL

Hi List,

When running regtest /connection/b0.vtc with the added setting below:

defaults

    option http-use-htx

The test no longer completes successfully; is this 'by design'? It seems
it's not closing the server connection? Or at least not logging so, which I
guess is why the log line isn't emitted at the same 'expected' time?


Regards,

PiBa-NL (Pieter)




regtest failure for /cache/h00000.vtc, config parsing fails? after commit 7805e2b

2018-12-01 Thread PiBa-NL

Hi Christopher, List

A recent commit 
http://git.haproxy.org/?p=haproxy.git;a=commit;h=7805e2bc1faf04169866c801087fd794535ecbb2 
seems to have broken config parsing for some configurations as seen with 
reg-test: /cache/h0.vtc


Using: haproxy version: 1.9-dev8-7805e2b

***  h1    0.0 debug|[ALERT] 334/200950 (30141) : Proxy 'test': unable 
to find the cache 'my_cache' referenced by http-response cache-store rule.
***  h1    0.0 debug|[ALERT] 334/200950 (30141) : Proxy 'test': unable 
to find the cache 'my_cache' referenced by http-request cache-use rule.
***  h1    0.0 debug|[ALERT] 334/200950 (30141) : Proxy 'test': unable 
to find the cache 'my_cache' referenced by the filter 'cache'.
***  h1    0.0 debug|[ALERT] 334/200950 (30141) : Fatal errors found in 
configuration.
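
For reference, the pieces those alerts refer to are wired together roughly like this (a sketch only; the bind address and backend name are assumptions, and the actual h00000.vtc configuration may differ):

```
cache my_cache
    total-max-size 4
    max-age 60

frontend test
    bind "fd@${fe1}"
    http-request cache-use my_cache
    http-response cache-store my_cache
    filter cache my_cache
    default_backend be1
```

All three references (cache-use, cache-store, and the cache filter) must resolve to a declared `cache` section, which is exactly what the parser complains about above.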


Can you take a look? Thanks in advance :).

Regards,

PiBa-NL (Pieter)




[PATCH] REGTEST: lua: check socket functionality from a lua-task

2018-11-30 Thread PiBa-NL

Hi List, Willy, Frederic, Adis,

Attached is the same reg-test as sent previously, this time as a .patch .

Created after the issue was reported here: 
https://www.mail-archive.com/haproxy@formilux.org/msg31924.html but 
should be part of the general tests when running regression-checks.


It can be back-ported as far as haproxy 1.6.
It does fail on the current 1.9-dev8-51e01b5 version, and back until
1.9-dev5-3e1f68b.

Hope it's okay like this :).

Regards,

PiBa-NL (Pieter)

From 29b2e82eb8461a2994841ba5460583c8acf5cddc Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Fri, 30 Nov 2018 21:01:01 +0100
Subject: [PATCH] REGTEST: lua: check socket functionality from a lua-task

Adding a new test /reg-tests/lua/b4.vtc which checks that basic core.tcp()
socket functionality works properly when used from a lua-task
---
 reg-tests/lua/b4.lua | 44 
 reg-tests/lua/b4.vtc | 34 +++
 2 files changed, 78 insertions(+)
 create mode 100644 reg-tests/lua/b4.lua
 create mode 100644 reg-tests/lua/b4.vtc

diff --git a/reg-tests/lua/b4.lua b/reg-tests/lua/b4.lua
new file mode 100644
index ..3ad14fe5
--- /dev/null
+++ b/reg-tests/lua/b4.lua
@@ -0,0 +1,44 @@
+
+local vtc_port = 0
+
+core.register_service("fakeserv", "http", function(applet)
+   vtc_port = applet.headers["vtcport"][0]
+   core.Info("APPLET START")
+   local response = "OK"
+   applet:add_header("Server", "haproxy/webstats")
+   applet:add_header("Content-Length", string.len(response))
+   applet:add_header("Content-Type", "text/html")
+   applet:start_response()
+   applet:send(response)
+   core.Info("APPLET DONE")
+end)
+
+local function cron()
+   -- wait until the correct port is set through the c0 request..
+   while vtc_port == 0 do
+   core.msleep(1)
+   end
+   core.Debug('CRON port:' .. vtc_port)
+
+   local socket = core.tcp()
+   local success = socket:connect("127.0.0.1", vtc_port)
+   core.Info("SOCKET MADE ".. (success or "??"))
+   if success ~= 1 then
+   core.Info("CONNECT SOCKET FAILED?")
+   return
+   end
+   local request = "GET / HTTP/1.1\r\n\r\n"
+   core.Info("SENDING REQUEST")
+   socket:send(request)
+   local result = ""
+   repeat
+   core.Info("4")
+   local d = socket:receive("*a")
+   if d ~= nil then
+   result = result .. d
+   end
+   until d == nil or d == 0
+   core.Info("Received: "..result)
+end
+
+core.register_task(cron)
\ No newline at end of file
diff --git a/reg-tests/lua/b4.vtc b/reg-tests/lua/b4.vtc
new file mode 100644
index ..91b5dde3
--- /dev/null
+++ b/reg-tests/lua/b4.vtc
@@ -0,0 +1,34 @@
+varnishtest "Lua: check socket functionality from a lua-task"
+feature ignore_unknown_macro
+
+#REQUIRE_OPTIONS=LUA
+#REQUIRE_VERSION=1.6
+
+server s1 {
+rxreq
+txresp -bodylen 20
+} -start
+
+haproxy h1 -conf {
+global
+lua-load ${testdir}/b4.lua
+
+frontend fe1
+mode http
+bind "fd@${fe1}"
+default_backend b1
+
+backend b1
+mode http
+http-request use-service lua.fakeserv
+
+} -start
+
+client c0 -connect ${h1_fe1_sock} {
+txreq -url "/" -hdr "vtcport: ${s1_port}"
+rxresp
+expect resp.status == 200
+} -run
+
+
+server s1 -wait
\ No newline at end of file
-- 
2.18.0.windows.1



Re: BUG: Lua tasks can't use client sockets after bf89ff3d

2018-11-29 Thread PiBa-NL

Hi Frederic, Adis,

On 29-11-2018 at 14:53, Frederic Lecaille wrote:

Hi Adis,

On 11/29/18 10:03 AM, Adis Nezirovic wrote:

On Thu, Nov 29, 2018 at 09:03:34AM +0100, Willy Tarreau wrote:

OK thanks, I'll take a look at it once I've flushed my pending stuff on
H2+HTX :-(


Great, I had my morning coffee and visited my optometrist, so here is
a fixed test script (correctly setting Host header).

P.S.
Lua usually suffers trying to do things in tasks, I don't think this is
the first time something gets broken. Can we make reg test with Lua
script (maybe strip out LuaSocket requirement)?



Yes. There already exist LUA reg tests in reg-tests/lua directory.

Fred.

Indeed some LUA tests already exist, but they didn't check using a
socket from a task.

Attached is a new test which does, and it does indeed fail on versions
since the mentioned commit.
Should I make a patch out of it for inclusion in git? Or can you guys do
that once the fix is also ready? I think it was preferred to get the
bugfix and regtest 'linked' then?


Regards,
PiBa-NL (Pieter)

varnishtest "Lua: txn:get_priv() scope"
feature ignore_unknown_macro

#REQUIRE_OPTIONS=LUA
#REQUIRE_VERSION=1.6

server s1 {
rxreq
txresp -bodylen 20
} -start

haproxy h1 -conf {
global
lua-load ${testdir}/b4.lua

frontend fe1
mode http
bind "fd@${fe1}"
default_backend b1

backend b1
mode http
http-request use-service lua.fakeserv

} -start

client c0 -connect ${h1_fe1_sock} {
txreq -url "/" -hdr "vtcport: ${s1_port}"
rxresp
expect resp.status == 200
} -run


server s1 -wait
local vtc_port = 0

core.register_service("fakeserv", "http", function(applet)
   vtc_port = applet.headers["vtcport"][0]
   core.Info("APPLET START")
   local response = "OK"
   applet:add_header("Server", "haproxy/webstats")
   applet:add_header("Content-Length", string.len(response))
   applet:add_header("Content-Type", "text/html")
   applet:start_response()
   applet:send(response)
   core.Info("APPLET DONE")
end)

local function cron()
   -- wait until the correct port is set through the c0 request..
   while vtc_port == 0 do
      core.msleep(1)
   end
   core.Debug('CRON port:' .. vtc_port)

   local socket = core.tcp()
   local success = socket:connect("127.0.0.1", vtc_port)
   core.Info("SOCKET MADE ".. (success or "??"))
   if success ~= 1 then
      core.Info("CONNECT SOCKET FAILED?")
      return
   end
   local request = "GET / HTTP/1.1\r\n\r\n"
   core.Info("SENDING REQUEST")
   socket:send(request)
   local result = ""
   repeat
      core.Info("4")
      local d = socket:receive("*a")
      if d ~= nil then
         result = result .. d
      end
   until d == nil or d == 0
   core.Info("Received: "..result)
end

core.register_task(cron)

reg-test failure for /connection/b00000.vtc after commit 3e1f68b

2018-11-29 Thread PiBa-NL

Hi Olivier, List,

It seems one of the reg-tests, /connection/b0.vtc, is failing after
this recent commit.


http://git.haproxy.org/?p=haproxy.git;a=commit;h=3e1f68bcf9adfcd30e3316b0822c2626cc2a6a84

Using HA-Proxy version 1.9-dev8-3e1f68b 2018/11/29. Some of the output
looks like this:


***  h1    0.0 debug|Using kqueue() as the polling mechanism.
 Slog_1  0.0 syslog|<133>Nov 29 22:44:25 haproxy[79765]: Proxy http 
started.
 Slog_1  0.0 syslog|<133>Nov 29 22:44:25 haproxy[79765]: Proxy 
ssl-offload-http started.
***  h1    0.0 debug|:ssl-offload-http.accept(0005)=000d from 
[::1:59078] ALPN=h2
***  h1    0.0 debug|:ssl-offload-http.clireq[000d:]: 
POST /1 HTTP/1.1
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
user-agent: curl/7.60.0
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
accept: */*
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
content-length: 3
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
content-type: application/x-www-form-urlencoded
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
host: [::1]:37611

***  h1    0.0 debug|:ssl-offload-http.srvcls[000d:adfd]
***  h1    0.0 debug|:ssl-offload-http.clicls[000d:adfd]
***  h1    0.0 debug|:ssl-offload-http.closed[000d:adfd]
 Slog_1  0.0 syslog|<134>Nov 29 22:44:25 haproxy[79765]: ::1:59078 
[29/Nov/2018:22:44:25.752] ssl-offload-http~ ssl-offload-http/http 
0/0/0/-1/1 400 187 - - CH-- 1/1/0/0/0 0/0 "POST /1 HTTP/1.1"
**   Slog_1  0.0 === expect ~ "Connect from .* to 
${h1_ssl_addr}:${h1_ssl_port}"

 Slog_1  0.0 EXPECT FAILED ~ "Connect from .* to ::1:37611"
...
 top   0.1 shell_out|  % Total    % Received % Xferd  Average 
Speed   Time    Time Time  Current
 top   0.1 shell_out| Dload Upload   
Total   Spent    Left  Speed
 top   0.1 shell_out|\r  0 0    0 0    0 0 0  0 
--:--:-- --:--:-- --:--:-- 0\r100    93    0    90 100 3   
5625    187 --:--:-- --:--:-- --:--:--  5812

 top   0.1 shell_out|HTTP/2 400 \r
 top   0.1 shell_out|cache-control: no-cache\r
 top   0.1 shell_out|content-type: text/html\r
 top   0.1 shell_out|\r
 top   0.1 shell_out|400 Bad request
 top   0.1 shell_out|Your browser sent an invalid request.
 top   0.1 shell_out|


While it should look like this, where offloaded traffic is forwarded to a
second http frontend:


***  h1    0.0 debug|:ssl-offload-http.accept(0005)=000d from 
[::1:48710] ALPN=h2
***  h1    0.0 debug|:ssl-offload-http.clireq[000d:]: 
POST /1 HTTP/1.1
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
user-agent: curl/7.60.0
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
accept: */*
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
content-length: 3
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
content-type: application/x-www-form-urlencoded
***  h1    0.0 debug|:ssl-offload-http.clihdr[000d:]: 
host: [::1]:48188
***  h1    0.0 debug|0001:http.accept(0008)=0017 from [::1:48710] 
ALPN=

***  h1    0.0 debug|0001:http.clireq[0017:]: POST /1 HTTP/1.1
***  h1    0.0 debug|0001:http.clihdr[0017:]: user-agent: 
curl/7.60.0

***  h1    0.0 debug|0001:http.clihdr[0017:]: accept: */*
***  h1    0.0 debug|0001:http.clihdr[0017:]: content-length: 3
***  h1    0.0 debug|0001:http.clihdr[0017:]: content-type: 
application/x-www-form-urlencoded

***  h1    0.0 debug|0001:http.clihdr[0017:]: host: [::1]:48188
 Slog_1  0.0 syslog|<134>Nov 29 22:18:35 haproxy[70605]: Connect 
from ::1:48710 to ::1:48188 (http/HTTP)
**   Slog_1  0.0 === expect ~ "Connect from .* to 
${h1_ssl_addr}:${h1_ssl_port}"


Do you see the same? Is more info needed?

Thanks in advance :)

Regards,

PiBa-NL (Pieter)




Re: [PATCH] REGTEST/MINOR: script: add run-regtests.sh script

2018-11-29 Thread PiBa-NL

Hi Frederic,

On 29-11-2018 at 19:18, Frederic Lecaille wrote:

On 11/29/18 8:47 AM, Willy Tarreau wrote:

On Thu, Nov 29, 2018 at 05:36:35AM +0100, Willy Tarreau wrote:
However I'm well aware that it's easier to work on improvements once 
the

script is merged, so what I've done now is to merge it and create a
temporary "reg-tests2" target in the makefile to use it without losing
the existing one. This way everyone can work in parallel, and once the
few issues seem reliably addressed, we can definitely replace the make
target.


Unfortunately ENOCOFFEE struck me this morning and I forgot to commit
my local changes so I merged the unmodified version which replaces the
"reg-test" target.

Thus now we're condemned to quickly fix these small issues :-)


Pieter,

I am having a look at all these issues.

Regards,

Fred.


If that means I don't have to do anything at this moment, thank you! (I
suppose your turnaround time from issue to fix will also be shorter than
waiting for my spare evening hour.)
I'll start checking some requirements of the existing .vtc files, and
perhaps writing new ones.


Regards,

PiBa-NL (Pieter)




Re: [PATCH] REGTEST/MINOR: script: add run-regtests.sh script

2018-11-27 Thread PiBa-NL

Hi Frederic, Willy,

On 27-11-2018 at 15:00, Frederic Lecaille wrote:

On 11/27/18 10:44 AM, Frederic Lecaille wrote:

On 11/27/18 9:52 AM, Willy Tarreau wrote:

Hi guys,

On Tue, Nov 27, 2018 at 09:45:25AM +0100, Frederic Lecaille wrote:

I put the script in the /reg-tests/ folder. Maybe it should have been
besides the Makefile in the / root ?


Yes I think it should be placed at the same level as the Makefile.


Well, we already have a "scripts" directory with the stuff used for
release and backport management. I think it perfectly has its place
there.

/scripts/ sounds good.


Note that the reg tests must be run from the Makefile with 
"reg-tests" target and possibly other arguments/variables.

Willy recently added REG_TEST_FILES variable.


I've changed the script to include the LEVEL parameter almost the way the
Makefile used it; I changed the behavior though, so without the parameter
it runs all tests.




I am sorry Pieter, a remaining detail I should have mentioned before:

+  for i in $(find $TESTDIR/ -type d -name "vtc.*");
+  do
+    echo "## $(cat $i/INFO) ##"
+    echo "## test results in: $i"
+    grep --  $i/LOG
+
+    echo "## $(cat $i/INFO) ##" >> $TESTDIR/failedtests.log
+    echo "## test results in: $i" >> $TESTDIR/failedtests.log
+    grep --  $i/LOG >> $TESTDIR/failedtests.log
+    echo >> $TESTDIR/failedtests.log
+  done

may be shortened thanks to tee command like that:

 cat <<- EOF | tee $TESTDIR/failedtests.log
 .
 .
 EOF
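
A sketch of what that tee-based variant of the failed-tests loop could look like (the directory layout and file names here are assumptions made up for the demo, not the actual script's):

```shell
# Demo: print each failed test's INFO and LOG once, duplicating the
# output to failedtests.log via tee instead of repeating every echo.
TESTDIR="$(mktemp -d)"
mkdir -p "$TESTDIR/vtc.demo"
echo "demo test" > "$TESTDIR/vtc.demo/INFO"
echo "assert failed at line 12" > "$TESTDIR/vtc.demo/LOG"

for i in "$TESTDIR"/vtc.*; do
  {
    echo "## $(cat "$i/INFO") ##"
    echo "## test results in: $i"
    cat "$i/LOG"
    echo
  } | tee -a "$TESTDIR/failedtests.log"
done
```

Grouping the echoes in `{ ... }` lets one `tee -a` duplicate the whole report, so console and log file can never drift apart.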

Removed some spaces for indentation which became part of the output.


I have tested your script. For me it is OK. Good job!
Thank you a lot, Pieter.


OK just let me know what to do with this, should I merge it as-is and
expect minor updates later, or do you or Pieter want to resend an
updated version ? I can adapt, let me know.


I have modified Pieter's patch for the modification mentioned above.
Seems to work ;)


Willy,

Here is a better patch which takes into account the modification
above and yours (the script is added in the "tests" directory).
I think Willy mentioned a 'scripts' directory? I changed the patch to
include that as well.


You can merge it as-is.

Regards,

Fred


A new patch is attached, which includes a LEVEL check,
and a modification of the Makefile to call ./scripts/run-regtests.sh.

Can someone please check it again before merging? Thanks, guys :).

Regards,
PiBa-NL (Pieter)

From 989bf7ccbfd849deed450291121cdcc68796ba64 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Tue, 27 Nov 2018 22:26:38 +0100
Subject: [PATCH] REGTEST/MINOR: script: add run-regtests.sh script

Some tests require a minimal haproxy version or compilation options to be
able to run successfully. This script allows to add 'requirements' to tests
to check so they will automatically be skipped if a requirement is not met.
The script supports several parameters to slightly modify its behavior
including the directories to search for tests.

Also, some features are not available on certain OSes; these can also
be 'excluded'. This should allow the complete set of test cases to be
run on any OS against any haproxy release without 'expected failures'.

The test .vtc files will need to be modified to include their 'requirements'
by including text options as shown below:
#EXCLUDE_TARGETS=dos,freebsd,windows
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9,
When excluding an OS by its TARGET, please add a comment explaining why
the test cannot succeed on that TARGET.
---
 Makefile|  25 +---
 scripts/run-regtests.sh | 317 
 2 files changed, 318 insertions(+), 24 deletions(-)
 create mode 100644 scripts/run-regtests.sh

diff --git a/Makefile b/Makefile
index 6d7a0159..aa6d66b9 100644
--- a/Makefile
+++ b/Makefile
@@ -1094,28 +1094,5 @@ opts:
 # LEVEL 3 scripts are low interest scripts (prefixed with 'l' letter).
 # LEVEL 4 scripts are in relation with bugs they help to reproduce (prefixed 
with 'b' letter).
 reg-tests:
-   $(Q)if [ ! -x "$(VARNISHTEST_PROGRAM)" ]; then \
-   echo "Please make the VARNISHTEST_PROGRAM variable point to the 
location of the varnishtest program."; \
-   exit 1; \
-   fi
-   $(Q)export LEVEL=$${LEVEL:-1}; \
-   if [ $$LEVEL = 1 ] ; then \
-  EXPR='h*.vtc'; \
-   elif [ $$LEVEL = 2 ] ; then \
-  EXPR='s*.vtc'; \
-   elif [ $$LEVEL = 3 ] ; then \
-  EXPR='l*.vtc'; \
-   elif [ $$LEVEL = 4 ] ; then \
-  EXPR='b*.vtc'; \
-   fi ; \
-   if [ -n "$(REG_TEST_FILES)" ] ; then \
-  err=0; \
-  for n in $(REG_TEST_FILES); do \
- HAPROXY_P

[PATCH] REGTEST/MINOR: script: add run-regtests.sh script

2018-11-25 Thread PiBa-NL

Hi Frederic, Willy,

Added the varnishtest script we have been discussing as a .patch this time.

I put the script in the /reg-tests/ folder. Maybe it should have been
beside the Makefile in the / root?


Also i put a bit of comments into the commit.

I hope it is okay like this? If not, feel free to comment on them or 
change them as required.


Once this one is 'accepted' I'll create a few patches for the existing
.vtc files to include their requirements (at least the more obvious ones).


Regards,
PiBa-NL (Pieter)

From 4432c10a0a822619c152aa187f18b2f6478ac565 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 25 Nov 2018 16:46:44 +0100
Subject: [PATCH] REGTEST/MINOR: script: add run-regtests.sh script

Some tests require a minimal haproxy version or compilation options to be
able to run successfully. This script allows to add 'requirements' to tests
to check so they will automatically be skipped if a requirement is not met.
The script supports several parameters to slightly modify its behavior
including the directories to search for tests.

Also, some features are not available on certain OSes; these can also
be 'excluded'. This should allow the complete set of test cases to be
run on any OS against any haproxy release without 'expected failures'.

The test .vtc files will need to be modified to include their 'requirements'
by including text options as shown below:
#EXCLUDE_TARGETS=dos,freebsd,windows
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9,
When excluding an OS by its TARGET, please add a comment explaining why
the test cannot succeed on that TARGET.
---
 reg-tests/run-regtests.sh | 303 ++
 1 file changed, 303 insertions(+)
 create mode 100644 reg-tests/run-regtests.sh

diff --git a/reg-tests/run-regtests.sh b/reg-tests/run-regtests.sh
new file mode 100644
index ..1094117f
--- /dev/null
+++ b/reg-tests/run-regtests.sh
@@ -0,0 +1,303 @@
+#!/usr/bin/env sh
+
+if [ "$1" = "--help" ]; then
+  cat << EOF
+### run-regtests.sh ###
+  Running run-regtests.sh --help shows this information about how to use it
+
+  Run without parameters to run all tests in the current folder (including 
subfolders)
+run-regtests.sh
+
+  Provide paths to run tests from (including subfolders):
+run-regtests.sh ./tests1 ./tests2
+
+  Parameters:
+--j , To run varnishtest with multiple jobs / threads for a faster 
overall result
+  run-regtests.sh ./fasttest --j 16
+
+--v, to run verbose
+  run-regtests.sh --v, disables the default varnishtest 'quiet' parameter
+
+--varnishtestparams , passes custom ARGS to varnishtest
+  run-regtests.sh --varnishtestparams "-n 10"
+
+  Including text below into a .vtc file will check for its requirements 
+  related to haproxy's target and compilation options
+# Below targets are not capable of completing this test successfully
+#EXCLUDE_TARGET=freebsd, abns sockets are not available on freebsd
+
+#EXCLUDE_TARGETS=dos,freebsd,windows
+
+# Below option is required to complete this test successfully
+#REQUIRE_OPTION=OPENSSL, this test needs OPENSSL compiled in.
+ 
+#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA
+
+# To define a range of versions that a test can run with:
+#REQUIRE_VERSION=0.0
+#REQUIRE_VERSION_BELOW=99.9
+
+  Configure environment variables to set the haproxy and varnishtest binaries 
to use
+setenv HAPROXY_PROGRAM /usr/local/sbin/haproxy
+setenv VARNISHTEST_PROGRAM /usr/local/bin/varnishtest
+EOF
+  return
+fi
+
+_startswith() {
+  _str="$1"
+  _sub="$2"
+  echo "$_str" | grep "^$_sub" >/dev/null 2>&1
+}
+
+_findtests() {
+  set -f
+  for i in $( find "$1" -name "*.vtc" ); do
+skiptest=
+require_version="$(grep "#REQUIRE_VERSION=" "$i" | sed -e 's/.*=//')"
+require_version_below="$(grep "#REQUIRE_VERSION_BELOW=" "$i" | sed -e 
's/.*=//')"
+require_options="$(grep "#REQUIRE_OPTIONS=" "$i" | sed -e 's/.*=//')"
+exclude_targets=",$(grep "#EXCLUDE_TARGETS=" "$i" | sed -e 's/.*=//'),"
+
+if [ -n "$require_version" ]; then
+  if [ $(_version "$HAPROXY_VERSION") -lt $(_version "$require_version") 
]; then
+echo "  Skip $i because option haproxy is version: $HAPROXY_VERSION"
+echo "REASON: this test requires at least version: 
$require_version"
+skiptest=1
+  fi
+fi
+if [ -n "$require_version_below" ]; then
+  if [ $(_version "$HAPROXY_VERSION") -ge $(_version 
"$require_version_below") ]; then
+echo "  Skip $i because option
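
The `_version` helper the checks above rely on is cut off in this archive; a minimal POSIX-sh sketch of the idea (an assumption, not the committed implementation) maps 'MAJOR.MINOR[-suffix]' strings to integers that plain `-lt`/`-ge` can compare:

```shell
# Turn "1.9-dev8" style version strings into comparable integers;
# awk's numeric coercion of "9-dev8" keeps only the leading digits.
_version() {
  echo "$1" | awk -F. '{ printf("%d%03d\n", $1, $2) }'
}

if [ "$(_version "1.9-dev8")" -ge "$(_version "1.6")" ]; then
  echo "requirement met"
fi
```

Padding the minor version to three digits keeps 1.10 sorting above 1.9, which naive string comparison would get wrong.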

Re: reg-test failures on FreeBSD, how to best adapt/skip some tests?

2018-11-22 Thread PiBa-NL

Hi Frederic,

I still have a literal newline with the IFS=, but the \n and \012 didn't
seem to work there..


Strangely on my PC with both bash and dash I do not have to change
IFS value to parse HAPROXY_VERSION, TARGET and OPTIONS with "read"
internal command.
Reading version, target and options works fine indeed without it; however,
the loop over test files fails if any .vtc file has a space character in
its filename. Or should we 'forbid' that in the documentation? Or is there
another, better workaround for that?


I do not think it is a good idea to build TESTDIR like that:
   TESTRUNDATETIME="$(date '+%Y-%m-%d_%H-%M-%S')"
   TESTDIR=${TMPDIR:-/tmp}/varnishtest_haproxy/$TESTRUNDATETIME
   mkdir -p $TESTDIR
What if we run the tests several times at the same time?
Well, they would have to run at the same second; not sure if that would be
wise to do. But mktemp now solves that, I guess :), at least for the
directory name part.

Please have a look to mkstemp utility.
Without the 's', right? Done, combined with the run date-time, which I do
like, so it's 'readable' and unique; the best of both ways, I think.
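
The combination described (a readable date-time plus mktemp's uniqueness) can be sketched like this (the directory prefix is an assumption, not necessarily what the script ended up using):

```shell
# Unique per-run results directory: human-readable timestamp for easy
# browsing, plus mktemp's XXXXXX suffix to avoid same-second collisions.
TESTRUNDATETIME="$(date '+%Y-%m-%d_%H-%M-%S')"
TESTDIR="$(mktemp -d "${TMPDIR:-/tmp}/varnishtest_haproxy.$TESTRUNDATETIME.XXXXXX")"
echo "$TESTDIR"
```

Two runs started in the same second now get distinct directories, while the timestamp still makes it obvious which run a directory belongs to.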

Remaining details:
    cat $i/LOG | grep -- 
should be replaced by
    grep --  $i/LOG

I guess I'll never learn to do this right the first time around ;). Fixed.
Note that your script is full of '\r' characters, with no newline
character at the end-of-file position:
Not sure what the 'correct' way would be. I think there is a CR LF
everywhere at the moment? And the script's hashbang points to 'sh'; will
this be an issue? (acme.sh does the same, and seems to be run on lots of
systems.) And if so, what can I best do to avoid issues?

Also note that some shells do not like the == operator (at line 3):

Used a single = now.

When I do not set both HAPROXY_PROGRAM I get this output with a script
with a successful result.


Checks added to avoid this issue; both the haproxy and varnishtest
binaries are now verified to exist.


Next round :).

Regards,

PiBa-NL (Pieter)

#!/usr/bin/env sh

if [ "$1" = "--help" ]; then
  cat << EOF
### run-regtests.sh ###
  Running run-regtests.sh --help shows this information about how to use it

  Run without parameters to run all tests in the current folder (including 
subfolders)
run-regtests.sh

  Provide paths to run tests from (including subfolders):
run-regtests.sh ./tests1 ./tests2

  Parameters:
--j , To run varnishtest with multiple jobs / threads for a faster 
overall result
  run-regtests.sh ./fasttest --j 16

--v, to run verbose
  run-regtests.sh --v, disables the default varnishtest 'quiet' parameter

--varnishtestparams , passes custom ARGS to varnishtest
  run-regtests.sh --varnishtestparams "-n 10"

  Including text below into a .vtc file will check for its requirements 
  related to haproxy's target and compilation options
# Below targets are not capable of completing this test successfully
#EXCLUDE_TARGET=freebsd, abns sockets are not available on freebsd

#EXCLUDE_TARGETS=dos,freebsd,windows

# Below option is required to complete this test successfully
#REQUIRE_OPTION=OPENSSL, this test needs OPENSSL compiled in.
 
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA

# To define a range of versions that a test can run with:
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9

  Configure environment variables to set the haproxy and varnishtest binaries 
to use
setenv HAPROXY_PROGRAM /usr/local/sbin/haproxy
setenv VARNISHTEST_PROGRAM /usr/local/bin/varnishtest
EOF
  return
fi

_startswith() {
  _str="$1"
  _sub="$2"
  echo "$_str" | grep "^$_sub" >/dev/null 2>&1
}

_findtests() {
  #find "$1" -name "*.vtc" | while read i; do
  set -f
  for i in $( find "$1" -name "*.vtc" ); do
#echo "TESTcheck '$i'"

skiptest=
require_version="$(grep "#REQUIRE_VERSION=" "$i" | sed -e 's/.*=//')"
require_version_below="$(grep "#REQUIRE_VERSION_BELOW=" "$i" | sed -e 
's/.*=//')"
require_options="$(grep "#REQUIRE_OPTIONS=" "$i" | sed -e 's/.*=//')"
exclude_targets=",$(grep "#EXCLUDE_TARGETS=" "$i" | sed -e 's/.*=//'),"

if [ -n "$require_version" ]; then
  if [ $(_version "$HAPROXY_VERSION") -lt $(_version "$require_version") ]; 
then
echo "  Skip $i because option haproxy is version: $HAPROXY_VERSION"
echo "REASON: this test requires at least version: $require_version"
skiptest=1
  fi
fi
if [ -n "$require_version_below" ]; then
  if [ $(_version "$HAPROXY_VERSION") -ge $(

Re: varnishtest with H2>HTX>H1(keep-alive)

2018-11-20 Thread PiBa-NL

Hi Christopher, Willy,

On 20-11-2018 at 12:09, Christopher Faulet wrote:


Hi,

The H2 is not yet compatible with the HTX for now. So you should never 
use both in same time. However, this configuration error should be 
detected during the configuration parsing, to avoid runtime errors. 
Here is a patch to do so. I'll merge it.


Thanks
--
Christopher Faulet 


Thanks, the 'old' config which tried to combine H2 and HTX is now
rejected (as expected).


I guess I misinterpreted the 'need' for HTX for the H2 conversion and
features, which I thought would include the new keep-alive this version
brings, but I guess those are separate things. Keep-alive for an H1
backend coming from an H2 frontend works fine without using that option.

New test case attached, one that actually works, regarding H2 > H1 with
keep-alive (without the HTX option though). It shows as 'passed' when run.


I did notice one line regarding the 'double logging' I have configured,
though, which I'm not sure is supposed to happen; it seems to be because
I'm using both stdout and :514 logging. Should that not be possible?:
***  h1    0.0 debug|[ALERT] 323/233813 (5) : sendmsg()/writev() 
failed in logger #2: Socket operation on non-socket (errno=38)


Partial config:
  global
    log stdout format raw daemon
    log :1514 local0

I'm using "HA-Proxy version 1.9-dev7-7ff4f14 2018/11/20" this time.

Or is it (again) something I'm configuring wrongly? ;)

Regards,
PiBa-NL (Pieter)

# h2 with h1 backend connection reuse check

varnishtest "h2 with h1 backend connection reuse check"
feature ignore_unknown_macro

#REQUIRE_VERSION=1.9

server s1 {
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

server s2  {
  stream 0 {
rxsettings
txsettings -ack
  } -run
  stream 1 {
rxreq
txresp -bodylen 200
  } -run
  stream 3 {
rxreq
txresp -bodylen 200
  } -run
} -start

server s3 -repeat 2 {
  rxreq
  txresp -gziplen 200
} -start

server s4 {
  timeout 3
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

server s5 {
  timeout 3
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

haproxy h1 -conf {
  global
#nbthread 3
log stdout format raw daemon
log :1514 local0
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
#option dontlog-normal
log global
option httplog
timeout connect 3s
timeout client  40s
timeout server  40s

  listen fe1
bind "fd@${fe1}"
server srv1 ${s1_addr}:${s1_port}

  listen fe3
bind "fd@${fe3}" proto h2
server srv3 ${s3_addr}:${s3_port}

  listen fe4
bind "fd@${fe4}" proto h2
server srv4 ${s4_addr}:${s4_port}

  listen fe5
bind "fd@${fe5}" ssl crt /usr/ports/net/haproxy-devel/test/common.pem alpn 
h2 
server srv5 ${s5_addr}:${s5_port} 

} -start


client c1 -connect ${h1_fe1_sock} {
txreq -url "/1"
rxresp
expect resp.status == 200
txreq -url "/2"
rxresp
expect resp.status == 200
} -start
client c1 -wait

client c2 -connect ${s2_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c2 -wait

client c3 -connect ${h1_fe3_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c3 -wait

client c4 -connect ${h1_fe4_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
   stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c4 -wait

shell {
HOST=${h1_fe5_addr}
if [ "${h1_fe5_addr}" = "::1" ] ; then
HOST="\[::1\]"
fi
curl --http2 -i -k https://$HOST:${h1_fe5_port}/CuRLtesT_1/ 
https://$HOST:${h1_fe5_port}/CuRLtesT_2/
}

server s1 -wait
server s2 -wait
server s3 -wait
server s4 -wait
server s5 -wait


Re: reg-test failures on FreeBSD, how to best adapt/skip some tests?

2018-11-19 Thread PiBa-NL

Hi Frederic, Willy,

Hello Pieter,


Do you intend to finalize this script? We would like to use it in 
haproxy sources.
Note that varnishtest already uses TMPDIR variable in place of /tmp if 
it is set in the environment.


Thanks again.

Fred.

Thanks for your advice and comments; to be honest I haven't looked at 
the script for several days, got distracted by other things ;). So sorry 
for the late reply.


Just cleaned it up a bit. I guess it's ready for another review.

I still have a literal newline in the IFS= assignment, since \n and \012 didn't seem to work there..

I've tried to incorporate all suggestions. Let me know if/what I missed :)

Regards,

PiBa-NL (Pieter)

#!/usr/bin/env sh

if [ "$1" = "--help" ]; then
  cat << EOF
### run-regtests.sh ###
  Running run-regtests.sh --help shows this information about how to use it

  Run without parameters to run all tests in the current folder (including subfolders)
run-regtests.sh

  Provide paths to run tests from (including subfolders):
run-regtests.sh ./tests1 ./tests2

  Parameters:
--j , To run varnishtest with multiple jobs / threads for a faster overall result
  run-regtests.sh ./fasttest --j 16

--v, to run verbose
  run-regtests.sh --v, disables the default varnishtest 'quiet' parameter

--varnishtestparams , passes custom ARGS to varnishtest
  run-regtests.sh --varnishtestparams "-n 10"

  Including text below into a .vtc file will check for its requirements related to haproxy's target and compilation options
# Below targets are not capable of completing this test successfully
#EXCLUDE_TARGET=freebsd, abns sockets are not available on freebsd

#EXCLUDE_TARGETS=dos,freebsd,windows

# Below option is required to complete this test successfully
#REQUIRE_OPTION=OPENSSL, this test needs OPENSSL compiled in.
 
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA

# To define a range of versions that a test can run with:
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9

  Configure environment variables to set the haproxy and varnishtest binaries to use
setenv HAPROXY_PROGRAM /usr/local/sbin/haproxy
setenv VARNISHTEST_PROGRAM /usr/local/bin/varnishtest
EOF
  return
fi

_startswith() {
  _str="$1"
  _sub="$2"
  echo "$_str" | grep "^$_sub" >/dev/null 2>&1
}

_findtests() {
  #find "$1" -name "*.vtc" | while read i; do
  IFS='
'
  set -f
  for i in $( find "$1" -name "*.vtc" ); do
#echo "TESTcheck '$i'"

skiptest=
require_version="$(grep "#REQUIRE_VERSION=" "$i" | sed -e 's/.*=//')"
require_version_below="$(grep "#REQUIRE_VERSION_BELOW=" "$i" | sed -e 's/.*=//')"
require_options="$(grep "#REQUIRE_OPTIONS=" "$i" | sed -e 's/.*=//')"
exclude_targets=",$(grep "#EXCLUDE_TARGETS=" "$i" | sed -e 's/.*=//'),"

if [ -n "$require_version" ]; then
  if [ $(_version "$HAPROXY_VERSION") -lt $(_version "$require_version") ]; then
    echo "  Skip $i because haproxy is version: $HAPROXY_VERSION"
    echo "REASON: this test requires at least version: $require_version"
    skiptest=1
  fi
fi
if [ -n "$require_version_below" ]; then
  if [ $(_version "$HAPROXY_VERSION") -ge $(_version "$require_version_below") ]; then
    echo "  Skip $i because haproxy is version: $HAPROXY_VERSION"
    echo "REASON: this test requires a version below: $require_version_below"
    skiptest=1
  fi
fi

if [ -n "$( echo "$exclude_targets" | grep ",$TARGET," )" ]; then
  echo "  Skip $i because exclude_targets"
  echo "REASON: exclude_targets '$exclude_targets' contains '$TARGET'"
  skiptest=1
fi

#echo "REQUIRE_OPTIONS : $require_options" 
for requiredoption in $(echo "$require_options" | tr "," "\012" ); do
  if [ -z "$( echo "$OPTIONS" | grep "USE_$requiredoption=1" )" ]
  then
    echo "  Skip $i because option $requiredoption not found"
    echo "REASON: option USE_$requiredoption=1 is required to run this test"
    skiptest=1
  fi
done
for required in "$(grep "#REQUIRE_OPTION=" "$i")";
do
  if [ -z "$required" ]
  then
continue
  fi
  requiredoption=$(echo "$required" | sed -e 's/.*=//' -e 's/,.*//')
  if [ -z "$
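(The excerpt above is truncated, and the `_version` helper it relies on is not shown. A minimal sketch of what such a helper could look like — purely an assumption, the real run-regtests.sh may differ — would map a dotted version to a single integer so the checks can use plain -lt / -ge comparisons:)

```shell
# Hypothetical _version helper: turn a dotted version like "1.9" or
# "2.0.1" into one comparable integer (e.g. "1.9" -> 1009000).
_version() {
  echo "$1" | awk -F. '{ printf "%d%03d%03d\n", $1, $2, $3 }'
}

v_a=$(_version "1.9")
v_b=$(_version "2.0.1")
# numeric comparison, as in the requirement checks above
[ "$v_a" -lt "$v_b" ] && echo "1.9 is below 2.0.1"
```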

Re: varnishtest with H2>HTX>H1(keep-alive)

2018-11-19 Thread PiBa-NL

Hi Willy,

Op 19-11-2018 om 4:37 schreef Willy Tarreau:

Hi Pieter,

On Mon, Nov 19, 2018 at 01:07:44AM +0100, PiBa-NL wrote:

Hi List,

I'm trying (and failing?) to write a H2>HTX>H1(keepalive) test.

Using haproxy 1.9-dev6-05b9b64.

Test vtc attached; I added the 'option http-use-htx' to the fe4 frontend/backend.
Is there anything else that should be changed?

For HTX, you need dev7.
Ah crap, I 'thought' I took the latest commit from the online branch, 
and stopped looking properly. I must have been a few minutes too soon or 
something, after I read the dev7 mail.. (and I should have checked that I 
actually compiled the expected version.. which I totally didn't do in 
the excitement about htx and the late hour..)



Or is my way of making the H2 request incorrect? Though the 3 tests before it 'seem' to work alright.

I've never tested varnishtest on h2 yet, I don't know precisely how
it's supposed to be configured, nor what minimum version is required
for this. From an haproxy perspective, your config looks OK.

By the way, in my opinion you should remove "option http-keep-alive"
since it's the default
That was a remnant of a previous try to get keep-alive with h2 which 
then wasn't supposed to work with dev5 yet.

, and I think the config would be more readable
by replacing the "frontend"+"backend" couples with a single "listen"
for each.
Okay, true, listen sections would make the config more readable :). I'm 
just used to making a frontend+backend for almost 'everything'.. (Usually 
I have multiple backends behind one frontend anyhow..) And also I 
'think' it shows more clearly in the logging output whether it did or didn't 
get passed from the frontend to the backend, but maybe that's just my 
imagination.



Below is the output I get, with an unexpected '500' status and an IC-- on the log line... It also seems it never contacted the s4 server.

Indeed, it faced an internal error while trying to make the connection.
What I think happened is that H2 decoded the request (and logged it),
but the upper layer stream failed to do anything from it since it's
configured with HTX and HTX is not supported in your version. Please
note that even with dev7 you don't have H2 over HTX yet. You
definitely need to update to dev7 to eliminate a number of candidate
bugs.


Without the htx option it does make 1 request to the s4, and the second expected request tries to make a second connection. (the 'old' way..)

Without the latest changes from dev7 it's expected since, by default,
server-side keep-alive is lost when H2 is used on the frontend (the
connection used to be tied to the stream, so since you have a distinct
stream for each request, you used to lose the connection). In dev7 the
server-side idle connection moved to the session which is attached to
the front connection so the keep-alive will still work.

Okay, new try with "1.9-dev7-1611965".

Hoping this helps,
Willy


However that still doesn't work yet (as also already seen by Frederic):

**   c4    0.2 === txreq -req GET -url /3
***  c4    0.2 tx: stream: 1, type: HEADERS (1), flags: 0x05, size: 37
**   c4    0.2 === rxresp
***  h1    0.2 debug|0007:fe4.accept(000e)=0010 from [::1:13402] 
ALPN=

 h1    0.2 STDOUT poll 0x11
***  c4    0.2 HTTP2 rx EOF (fd:6 read: No error: 0)
 c4    0.2 could not get frame header
**   c4    0.2 Ending stream 1
***  c4    0.2 closing fd 6
**   c4    0.2 Ending
*    top   0.2 RESETTING after ./PB-TEST/h2-keepalive-backend.vtc
**   h1    0.2 Reset and free h1 haproxy 31909
**   h1    0.2 Wait
**   h1    0.2 Stop HAproxy pid=31909
**   h1    0.2 WAIT4 pid=31909 status=0x008b (user 0.013928 sys 0.00)
*    h1    0.2 Expected exit: 0x0 signal: 0 core: 0
 h1    0.2 Bad exit status: 0x008b exit 0x0 signal 11 core 128
*    top   0.2 failure during reset
#    top  TEST ./PB-TEST/h2-keepalive-backend.vtc FAILED (0.169) exit=2
root@freebsd11:/usr/ports/net/haproxy-devel # haproxy -v
HA-Proxy version 1.9-dev7-1611965 2018/11/19
Copyright 2000-2018 Willy Tarreau 

So I guess the question remains: is the test configured wrongly, or is 
some other improvement still needed?
(I guess improvement in this case surely is needed, as crashing is never 
the right way, even if the input might be 'wrong'.)


Regards,
PiBa-NL (Pieter)




varnishtest with H2>HTX>H1(keep-alive)

2018-11-18 Thread PiBa-NL

Hi List,

I'm trying (and failing?) to write a H2>HTX>H1(keepalive) test.

Using haproxy 1.9-dev6-05b9b64.

Test vtc attached; I added the 'option http-use-htx' to the fe4 frontend/backend.

Is there anything else that should be changed?
Or is my way of making the H2 request incorrect? Though the 3 tests before it 'seem' to work alright.


Below is the output I get, with an unexpected '500' status and an IC-- on the log line... It also seems it never contacted the s4 server.
Without the htx option it does make 1 request to the s4, and the second expected request tries to make a second connection. (the 'old' way..)


Thanks in advance.

Regards
PiBa-NL (Pieter)

**   c4    0.2 === txreq -req GET -url /3
***  c4    0.2 tx: stream: 1, type: HEADERS (1), flags: 0x05, size: 37
**   c4    0.2 === rxresp
***  h1    0.2 debug|0007:fe4.accept(000e)=0011 from [::1:25432] 
ALPN=

***  h1    0.2 debug|0007:fe4.clireq[0011:]: GET /3 HTTP/1.1
***  h1    0.2 debug|0007:b4.clicls[0011:adfd]
***  h1    0.2 debug|0007:b4.closed[0011:adfd]
***  h1    0.2 debug|::1:25432 [19/Nov/2018:01:03:11.550] fe4 b4/srv4 
0/0/-1/-1/0 500 203 - - IC-- 2/1/0/0/0 0/0 "GET /3 HTTP/1.1"

***  c4    0.2 rx: stream: 1, type: HEADERS (1), flags: 0x04, size: 26
***  c4    0.2 flag: END_TYPE_HEADERS
 c4    0.2 header[ 0]: :status : 500
 c4    0.2 header[ 1]: cache-control : no-cache
 c4    0.2 header[ 2]: content-type : text/html
***  c4    0.2 rx: stream: 1, type: DATA (0), flags: 0x00, size: 96
***  c4    0.2 rx: stream: 1, type: DATA (0), flags: 0x01, size: 0
***  c4    0.2 flag: END_STREAM
 c4    0.2 s1 - no data
**   c4    0.2 === expect resp.status == 200
 c4    0.2 EXPECT resp.status (500) == "200" failed

# h2 with h1 backend connection reuse check

# the c3 > h1 > s3 test works (wrongly?) because haproxy breaks connection to the server, and creates a new one..
# the c4 > h1 > s4 test fails because haproxy breaks connection to the server, while it should keep the connection alive.


varnishtest "h2 with h1 backend connection reuse check"
feature ignore_unknown_macro

server s1 {
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

server s2  {
  stream 0 {
rxsettings
txsettings -ack
  } -run
  stream 1 {
rxreq
txresp -bodylen 200
  } -run
  stream 3 {
rxreq
txresp -bodylen 200
  } -run
} -start

server s3 -repeat 2 {
  rxreq
  txresp -gziplen 200
} -start

server s4 {
  timeout 3
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

haproxy h1 -conf {
  global
#nbthread 3
log stdout format raw daemon
#log :1514 local0
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
#option dontlog-normal
log global
option httplog
timeout connect 3s
timeout client  40s
timeout server  40s

  frontend fe1
bind "fd@${fe1}"
default_backend b1

  backend b1
option http-keep-alive
server srv1 ${s1_addr}:${s1_port}

  frontend fe3
bind "fd@${fe3}" proto h2
default_backend b3

  backend b3
server srv3 ${s3_addr}:${s3_port}

  frontend fe4
bind "fd@${fe4}" proto h2
default_backend b4
option http-use-htx

  backend b4
option http-keep-alive
server srv4 ${s4_addr}:${s4_port}
option http-use-htx

} -start


client c1 -connect ${h1_fe1_sock} {
txreq -url "/1"
rxresp
expect resp.status == 200
txreq -url "/2"
rxresp
expect resp.status == 200
} -start
client c1 -wait

client c2 -connect ${s2_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c2 -wait

client c3 -connect ${h1_fe3_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c3 -wait

client c4 -connect ${h1_fe4_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
   stream 1 {
  txreq -req GET -url /3
rxresp
expect resp.status == 200
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
expect resp.status == 200
} -run
} -start
client c4 -wait

server s1 -wait
server s2 -wait
server s3 -wait
server s4 -wait


Re: Some patches about the master worker

2018-11-06 Thread PiBa-NL

Hi William,

Something seems to have been broken by the patch series below (when using 
threads?).


***  h1    0.0 debug|[ALERT] 309/191142 (6588) : Current worker #1 
(6589) exited with code 134 (Abort trap)
***  h1    0.0 debug|[ALERT] 309/191142 (6588) : exit-on-failure: 
killing every workers with SIGTERM


Yes, I'm using -W in the varnishtest, and yes, that adds a 1-second delay 
or something, but that never prevented this test from succeeding for me 
before..


Can you take a look?

Regards,
PiBa-NL (Pieter)

Op 6-11-2018 om 18:33 schreef Willy Tarreau:

On Tue, Nov 06, 2018 at 05:37:09PM +0100, William Lallemand wrote:

Some improvements for the master-worker.

Thanks, whole series merged. I've replaced the warning with a qfprintf()
as we discussed so that it's less scary at boot :-)  I think we'd benefit
from having ha_notice(), ha_info() and ha_debug() in complement to the
existing ha_alert() and ha_warning(). This would greatly help display
runtime info. And then you could replace your reload warning with a more
suitable notice.

Willy



# Checks that request and connection counters are properly kept

varnishtest "Connection counters check"
feature ignore_unknown_macro

server s1 {
rxreq
expect req.http.TESTsize == 10
txresp
} -repeat 4 -start

haproxy h1 -W -conf {
  global
nbthread 3
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
log global
option httplog
timeout connect 3s
timeout client  40s
timeout server  40s

  frontend fe1
maxconn 200
bind "fd@${fe_1}"
acl donelooping hdr(TEST) -m len 10
http-request set-header TEST "%[hdr(TEST)]x"
use_backend b2 if donelooping
default_backend b1

  backend b1
server srv1 ${h1_fe_1_addr}:${h1_fe_1_port}

  backend b2
fullconn 200
# haproxy 1.8 does not have the ,length converter.
#acl OK hdr(TEST) -m len 500
#http-request deny deny_status 200 if OK
#http-request deny deny_status 400

# haproxy 1.9 does have a ,length converter.
http-request set-header TESTsize "%[hdr(TEST),length]"
http-request del-header TEST
server srv2 ${s1_addr}:${s1_port}

} -start

barrier b1 cond 4

client c1 -connect ${h1_fe_1_sock} {

  timeout 17
barrier b1 sync
txreq -url "/1"
rxresp
expect resp.status == 200
} -start
client c2 -connect ${h1_fe_1_sock} {
  timeout 17
barrier b1 sync
txreq -url "/2"
rxresp
expect resp.status == 200
} -start
client c3 -connect ${h1_fe_1_sock} {
  timeout 17
barrier b1 sync
txreq -url "/3"
rxresp
expect resp.status == 200
} -start
client c4 -connect ${h1_fe_1_sock} {
  timeout 17
barrier b1 sync
txreq -url "/4"
rxresp
expect resp.status == 200
} -start

client c1 -wait
client c2 -wait
client c3 -wait
client c4 -wait

# allow a little time to close connections.
delay 1

haproxy h1 -cli {
send "show info"
expect ~ "CurrConns: 0 *\\nCumConns: 41*\\nCumReq: 81"
#send "show activity"
#expect ~ "hoi"
}
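The counter values in the expect regex above follow from the looping config: each client request is proxied back into fe1 by backend b1, adding one 'x' to the TEST header per pass until the header is 10 characters long and b2 is chosen. A rough sketch of the connection arithmetic (my reading of the config, not authoritative):

```shell
# 4 clients, each request loops 10 times through fe1 before reaching b2.
clients=4
passes=10        # the header grows one 'x' per pass; donelooping at length 10
cli_conn=1       # the stats-socket connection issuing "show info"

cum_conns=$((clients * passes + cli_conn))
echo "expected CumConns around: $cum_conns"
```

This lines up with the 41 checked for in the "show info" expect.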


Re: enabling H2 slows down my webapp, how to use keep-alive on backend ssl connection?

2018-10-29 Thread PiBa-NL

Hi Lukas,
Op 29-10-2018 om 16:39 schreef Lukas Tribus:

Hi,


On Sun, 28 Oct 2018 at 23:47, PiBa-NL  wrote:

Hi List,

When I enable H2 ('alpn h2,http/1.1') on an haproxy bind line with
offloading ('mode http'), the overall loading of a web application I use
takes longer than without. (Tried with 1.9-dev5 and previous versions)

The webapp loads around 25 objects of css/js/images on a page, and when
using H1 it uses 4 keep-alive connections to retrieve all objects.

However, when enabling H2 on the frontend, the connection to the webserver
(which itself is also made with SSL encryption) is made for every single
requested object. I suspect this is the main reason for the slowdown: it
now needs to perform the SSL handshake on the backend 25 times.

Is this by (current) design? Is it planned/possible this will be changed
before 1.9 release?

Yes and yes, this is what will be fixed be the native HTTP
representation (codenamed HTX), hopefully this is something we will be
able to play with in 1.9-dev6.


Regards,
Lukas


I wasn't sure if the H1 keep-alive connection on the backend was supposed to 
work already (coming through an H2 frontend).

I'll give it another try with dev6 or above.

Thanks for your confirmation that this part is still a work in progress :).

Regards,
PiBa-NL (Pieter)




enabling H2 slows down my webapp, how to use keep-alive on backend ssl connection?

2018-10-28 Thread PiBa-NL

Hi List,

When I enable H2 ('alpn h2,http/1.1') on an haproxy bind line with offloading 
('mode http'), the overall loading of a web application I use takes longer 
than without. (Tried with 1.9-dev5 and previous versions)


The webapp loads around 25 objects of css/js/images on a page, and when 
using H1 it uses 4 keep-alive connections to retrieve all objects.


However, when enabling H2 on the frontend, the connection to the webserver 
(which itself is also made with SSL encryption) is made for every single 
requested object. I suspect this is the main reason for the slowdown: it 
now needs to perform the SSL handshake on the backend 25 times.


Is this by (current) design? Is it planned/possible this will be changed 
before 1.9 release?


Or is it likely my configuration / conclusion is wrong?

I've added a little vtc trying to simulate the behavior. It currently 
fails on "s4    0.2 HTTP rx failed (fd:10 read: Connection reset by 
peer)", which is where the s4 server expects a second request over 
its keep-alive connection (assuming I wrote the test correctly..), while 
it 'should' fail on the s3 server.


Regards,

PiBa-NL (Pieter)

# h2 with h1 backend connection reuse check

# the c3 > h1 > s3 test works (wrongly?) because haproxy breaks connection to the server, and creates a new one..
# the c4 > h1 > s4 test fails because haproxy breaks connection to the server, while it should keep the connection alive.


varnishtest "h2 with h1 backend connection reuse check"
feature ignore_unknown_macro

server s1 {
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

server s2  {
  stream 0 {
rxsettings
txsettings -ack
  } -run
  stream 1 {
rxreq
txresp -bodylen 200
  } -run
  stream 3 {
rxreq
txresp -bodylen 200
  } -run
} -start

server s3 -repeat 2 {
  rxreq
  txresp -gziplen 200
} -start

server s4 {
  rxreq
  txresp -gziplen 200
  rxreq
  txresp -gziplen 200
} -start

haproxy h1 -W -conf {
  global
#nbthread 3
log :1514 local0
stats socket /tmp/haproxy.socket level admin

  defaults
mode http
#option dontlog-normal
log global
option httplog
timeout connect 3s
timeout client  40s
timeout server  40s

  frontend fe1
bind "fd@${fe1}"
default_backend b1

  backend b1
option http-keep-alive
server srv1 ${s1_addr}:${s1_port}

  frontend fe3
bind "fd@${fe3}" proto h2
default_backend b3

  backend b3
server srv3 ${s3_addr}:${s3_port}

  frontend fe4
bind "fd@${fe4}" proto h2
default_backend b4

  backend b4
option http-keep-alive
server srv4 ${s4_addr}:${s4_port}

} -start


client c1 -connect ${h1_fe1_sock} {
txreq -url "/1"
rxresp
expect resp.status == 200
txreq -url "/2"
rxresp
expect resp.status == 200
} -start
client c1 -wait

client c2 -connect ${s2_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
} -run
} -start
client c2 -wait

client c3 -connect ${h1_fe3_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
} -run
} -start
client c3 -wait

client c4 -connect ${h1_fe4_sock} {
stream 0 {
txsettings -hdrtbl 0
rxsettings
} -run
stream 1 {
  txreq -req GET -url /3
rxresp
} -run
stream 3 {
  txreq -req GET -url /4
rxresp
} -run
} -start
client c4 -wait

server s1 -wait
server s2 -wait
server s3 -wait
server s4 -wait


Re: 'http-response cache-store icons if { path_beg /icons }' produces crashes and/or random behavior

2018-10-28 Thread PiBa-NL

Hi Willy,

Op 28-10-2018 om 20:21 schreef Willy Tarreau:

Hi Pieter,

On Sun, Oct 28, 2018 at 12:49:44AM +0200, PiBa-NL wrote:

Hello Chad, List,

Thanks for the nice article
https://www.haproxy.com/blog/introduction-to-haproxy-acls/

However one of the examples that shows how to use cache-store seems flawed..

Attached ive made a little varnishtest, that:

- fails to run successfully when repeated 100 times with the path_beg acl on
1.8.14 (some requests are sent twice to the s1 server, which stops listening
after 1..), but it's about 6% of runs that fail..
There is a bug in this config, which is reported by an haproxy warning :
Agreed, the configuration is 'wrong', and haproxy says it will 'never' 
match, but the results above show it did match in 94% of runs, so it 
was all of the following: a user issue, a documentation issue and a 
software issue. Documentation and software are both fixed now, thanks. 
And at least the user will get consistent results ;).


***  h10.0 debug|[WARNING] 300/201711 (22373) : parsing 
[/tmp/vtc.22366.5fa5c991/h1/cfg:37] : acl 'WeCanSafelyCacheThatFile' will never 
match because it only involves keywords that are incompatible with 'backend 
http-response header rule'
***  h10.0 debug|[WARNING] 300/201711 (22373) : parsing 
[/tmp/vtc.22366.5fa5c991/h1/cfg:38] : acl 'WeCanSafelyCacheThatFile' will never 
match because it only involves keywords that are incompatible with 'backend 
http-response header rule'

Indeed, the ACL references the "path" sample fetch function in the
response, which is not available.


- produces core dumps with 1.9-dev4-1ff7633

I've just managed to reproduce it and fix it (thanks for your report
and the reproducer). I *suspect* 1.8 and older are not safer, but that
the way the buffers work there make it dereference a wrong (but existing)
memory area, thus it doesn't crash.
Thanks for the quick fix; with the latest 1.9 code the test 'fails' 
consistently for the right reason.

Using the var(txn.path) instead it succeeds on both versions.

Indeed, it's expected since the path is lost once the request leaves.

Thanks!
Willy


Regards,

PiBa-NL (Pieter)




Re: reg-test failures on FreeBSD, how to best adapt/skip some tests?

2018-10-28 Thread PiBa-NL

Hi Frederic,

Op 19-10-2018 om 11:51 schreef Frederic Lecaille:
The idea of the script sounds good to me. About the script itself it 
is a nice work which could be a good start.

Thanks.


Just a few details below.

Note that "cat $file | grep something" may be shortened to "grep 
something $file". It would also be interesting to avoid creating 
temporary files as much as possible (at least testflist.lst is not 
necessary, I think).


TARGET=$(haproxy -vv | grep "TARGET  = " | sed 's/.*= //')
OPTIONS=$(haproxy -vv | grep "OPTIONS = " | sed 's/.*= //')

may be shortened by these lines:

    { read TARGET; read OPTIONS; } << EOF
    $(./haproxy -vv |grep 'TARGET\|OPTIONS' | sed 's/.* = //')
    EOF

Thanks, I've changed this.

which is portable, or something similar.

    sed 's/.*=//'| sed 's/,.*//'

may be shortened by

    sed -e 's/.*=//' -e 's/,.*//'

Thanks, I've changed this as well.
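As an illustration of the heredoc read trick suggested above (the two `haproxy -vv` output lines are simulated here, so their exact format is an assumption):

```shell
# Simulate the two relevant lines of `haproxy -vv` output.
haproxy_vv() {
  printf 'TARGET  = freebsd\nOPTIONS = USE_OPENSSL=1 USE_ZLIB=1\n'
}

# Portable two-variable read from a here-document: one pass over the
# output fills both TARGET and OPTIONS, no temporary file needed.
{ read -r TARGET; read -r OPTIONS; } << EOF
$(haproxy_vv | grep 'TARGET\|OPTIONS' | sed 's/.* = //')
EOF

echo "TARGET=$TARGET OPTIONS=$OPTIONS"
```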



Also note, there are some cases where options are enabled without 
appearing in OPTIONS variable value.


For instance if you compile haproxy like that:

   $ make TARGET=linux2628

the support for threads is enabled without appearing in the OPTIONS 
variable value. I am not sure this is an issue at this time.
That could become an issue, but it should be easy to solve, perhaps by 
adding a 'fake' option to check against.. Or perhaps by adding a separate 
check; I'm not sure yet.


Perhaps we could use only one line for the required options and 
excluded targets, like that:


#EXCLUDED_TARGETS=freebsd,dos,windows ;)
#REQUIRED_OPTIONS=OPENSSL,LUA,ZLIB
Added this option, but I like the option of excluding single 
targets and having a comment behind each explaining the reason.. But I 
guess, if one would want to know, comments above the setting could 
also 'explain' why that target is currently not 'valid' to run the test 
on. Should I remove the settings for the 'single' option/target?


New 'version' of the script attached.
It now supports a set of parameters to modify its behavior a little, and 
also checks for a 'version requirement', so an H2 test doesn't have to 
fail on 1.7.
Should I 'ask' to delete old test results? Or would there be another, 
better way to keep previous results separated from the current run?
If you could give it another look I would appreciate that. Are there any 
things that need to be added/changed before considering adding it 
to the haproxy sources branch?


Regards,

PiBa-NL (Pieter)

#!/usr/bin/env sh

if [ "$1" = "--help" ]; then
  cat << EOF
### run-regtests.sh ###
  Running run-regtests.sh --help shows this information about how to use it

  Run without parameters to run all tests in the current folder (including subfolders)
run-regtests.sh

  Provide paths to run tests from (including subfolders):
run-regtests.sh ./tests1 ./tests2

  Parameters:
--j , To run varnishtest with multiple jobs / threads for a faster overall result
  run-regtests.sh ./fasttest --j 16

--v, to run verbose
  run-regtests.sh --v, disables the default varnishtest 'quiet' parameter

--varnishtestparams , passes custom ARGS to varnishtest
  run-regtests.sh --varnishtestparams "-n 10"

--f, force deleting old /tmp/*.vtc results without asking

  Including text below into a .vtc file will check for its requirements related to haproxy's target and compilation options
# Below targets are not capable of completing this test successfully
#EXCLUDE_TARGET=freebsd, abns sockets are not available on freebsd

#EXCLUDE_TARGETS=dos,freebsd,windows

# Below option is required to complete this test successfully
#REQUIRE_OPTION=OPENSSL, this test needs OPENSSL compiled in.
 
#REQUIRE_OPTIONS=ZLIB,OPENSSL,LUA

# To define a range of versions that a test can run with:
#REQUIRE_VERSION=0.0
#REQUIRE_VERSION_BELOW=99.9

EOF
  return
fi

_startswith() {
  _str="$1"
  _sub="$2"
  echo "$_str" | grep "^$_sub" >/dev/null 2>&1
}

_findtests() {
  #find "$1" -name "*.vtc" | while read i; do
  IFS='
'
  set -f
  for i in $( find "$1" -name "*.vtc" ); do
#echo "TESTcheck '$i'"

skiptest=
require_version="$(grep "#REQUIRE_VERSION=" "$i" | sed -e 's/.*=//')"
require_version_below="$(grep "#REQUIRE_VERSION_BELOW=" "$i" | sed -e 's/.*=//')"
require_options="$(grep "#REQUIRE_OPTIONS=" "$i" | sed -e 's/.*=//')"
exclude_targets=",$(grep "#EXCLUDE_TARGETS=" "$i" | sed -e 's/.*=//'),"

if [ -n "$r

'http-response cache-store icons if { path_beg /icons }' produces crashes and/or random behavior

2018-10-27 Thread PiBa-NL

Hello Chad, List,

Thanks for the nice article 
https://www.haproxy.com/blog/introduction-to-haproxy-acls/


However one of the examples that shows how to use cache-store seems flawed..

Attached I've made a little varnishtest that:

- fails to run successfully when repeated 100 times with the path_beg 
acl on 1.8.14 (some requests are sent twice to the s1 server, which 
stops listening after 1..), but it's about 6% of runs that fail..


- produces core dumps with 1.9-dev4-1ff7633

Using the var(txn.path) instead it succeeds on both versions.

I think it's important to 'fix' the article (and perhaps include the 
cache section declaration as well), and perhaps investigate why haproxy 
does seem to process the fetch it warns about on startup that it 
would/should never match..


Regards,

PiBa-NL (Pieter)

# Checks that basic cache is working

varnishtest "Checks that basic cache is working"
feature ignore_unknown_macro

server s1 {
rxreq
txresp -bodylen 50
} -start

syslog Slg_1 -level notice {
recv
expect ~ "Proxy fe1 started"
recv
expect ~ "Proxy b1 started"
recv info
expect ~ "fe1 b1/srv1"
recv info
expect ~ "fe1 b1/"
recv info
expect ~ "fe1 b1/"
recv info
expect ~ "fe1 b1/"
} -start

haproxy h1 -conf {
  global
#nbthread 3
log ${Slg_1_addr}:${Slg_1_port} local0
#log :1514 local0
#nokqueue
stats socket /tmp/haproxy.socket level admin
#log /tmp/log local0

  defaults
mode http
#option dontlog-normal
log global
option httplog
timeout connect 3s
timeout client  4s
timeout server  4s

  cache icons
total-max-size 10
max-age 60

  frontend fe1
bind "fd@${fe_1}"
default_backend b1

  backend b1
http-request set-var(txn.MyPath) path

#acl WeCanSafelyCacheThatFile var(txn.MyPath) -m beg /icons/
acl WeCanSafelyCacheThatFile path_beg /icons/

http-request cache-use icons if WeCanSafelyCacheThatFile
http-response cache-store icons if WeCanSafelyCacheThatFile
http-response add-header CacheResponse TRUE if WeCanSafelyCacheThatFile

server srv1 ${s1_addr}:${s1_port}

} -start

client c1 -connect ${h1_fe_1_sock} {
  timeout 5
txreq -url /icons/
rxresp
expect resp.status == 200

txreq -url /icons/
rxresp
expect resp.status == 200
} -repeat 2 -run

syslog Slg_1 -wait

Re: reg-test failures on FreeBSD, how to best adapt/skip some tests?

2018-10-18 Thread PiBa-NL

Hi Frederic,

Do you have a little time to take a look at (the idea of) the script? 
Thanks in advance :).


Op 3-10-2018 om 21:52 schreef PiBa-NL:

Hi Frederic,

Made a little script (attached)..
What do you think ? Could this work for current 'requirements'?


Regards,

PiBa-NL (Pieter)




Re: Logging real rather than load balancer IP

2018-10-17 Thread PiBa-NL

Hi Mark,

Op 17-10-2018 om 23:36 schreef Mark Holmes:


Question: We have some web apps which are behind an haproxy load 
balancer, with TLS being terminated on the server rather than at the 
balancer (so using tcp mode). The web server logs are recording the 
source IP as that of the load balancer as expected. I now have a 
requirement to pass the ‘real’ IP address through to the web 
application and also record it in the webserver logs. Currently, with 
other applications where TLS is terminated at the balancer and we are 
using http mode to connect to the backend web servers I use 
X-FORWARDED-FOR to pass through the ‘real’ IP address but obviously 
that won’t help me when using TCP mode. I read some stuff about using 
the PROXY protocol, but I’m running IIS 8.5 and as far as I can tell 
it doesn’t support PROXY. Am I correct?


My other option appears to be to switch to transparent proxying. I 
have verified the kernel I’m using is compiled with TPROXY support as 
is haproxy itself. Before I go down this road – is transparent 
proxying the correct/best option here?


Thanks in advance for any advice

Mark


There are 3 options to let a webserver know the client IP:

-forwardfor (only works with 'mode http' and needs the webserver to know 
how to use that header)

-proxyprotocol (needs the server to support it, and to know how to use it.)
-TPROXY (needs routing of the reply traffic through haproxy)

As you can see, each has its own disadvantages.. And with the 
first 2 already ruled out, the 3rd is your only option.. (that I know of, 
anyhow..)
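For illustration, hedged config sketches of the three approaches (the backend names and addresses are made up, not taken from this thread — check the haproxy documentation for your version before using any of these):

```
# 1) 'mode http': add the client IP as an X-Forwarded-For header
backend web_http
    mode http
    option forwardfor
    server web1 192.0.2.10:443 ssl verify none

# 2) 'mode tcp': prepend the PROXY protocol header
#    (the server must support and expect it)
backend web_proxyproto
    mode tcp
    server web1 192.0.2.10:443 send-proxy

# 3) 'mode tcp': transparent proxying (needs TPROXY support in the
#    kernel and haproxy, plus return routing through the haproxy box)
backend web_tproxy
    mode tcp
    source 0.0.0.0 usesrc clientip
    server web1 192.0.2.10:443
```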


Regards,

PiBa-NL (Pieter)



Re: [PATCH] REGTEST/MINOR: loadtest: add a test for connection counters

2018-10-12 Thread PiBa-NL

Hi William,

Op 12-10-2018 om 10:53 schreef William Lallemand:

The attached patch should fix the issue.


The patch works for me, thanks.

Regards,

PiBa-NL (Pieter)




Re: [PATCH] REGTEST/MINOR: loadtest: add a test for connection counters

2018-10-11 Thread PiBa-NL

Hi Willy, William,


Op 2-10-2018 om 3:56 schreef Willy Tarreau:

it's important to cut this into pieces
to figure what it's showing.


A little update on this issue, split in 2 pieces.

-Connection and request counters too low when run as a regtest from 
varnishtest (bug?)
It turns out that starting haproxy from varnishtest using -W 
master-worker mode actually creates 2 processes that are handling 
traffic. That explains why a large part of the connections isn't seen by 
the other haproxy instance and the stats show too low amounts of 
connections. Bisecting seems to point to this commit: b3f2be3; perhaps 
William can take a look at it? Not really sure when this 
occurs in a 'real' environment; it doesn't seem to happen when manually 
running haproxy -W, but still it's strange that this occurs when 
varnishtest is calling haproxy.


-Request counter too high (possibly an improvement request?)
http_end_txn_clean_session(..) is called, which increments the 
counter on a finished http response. I was testing with 2 different 
methods for 1.8 and 1.9: due to a missing 'length' converter I used a 
different approach in my 1.9 test, which makes the comparison unfair. 
Sorry I didn't realize this earlier; I thought it did 'more or less' the 
same, but that seems to have been 'less'. Together with the fact that I 
found the numbers odd/unexpected, I assumed a bigger problem than it 
actually seems to be. Perhaps it's not even that bad: haproxy is 
'preparing' for a second request over the keep-alive connection, if I 
understand correctly, which eventually doesn't happen but is counted. 
Maybe that is a point that can be improved in a future version, if time 
permits? Or would it even be expected to behave like that?


Regards,
PiBa-NL (Pieter)




Re: [PATCH] REGTEST/MINOR: loadtest: add a test for connection counters

2018-10-04 Thread PiBa-NL

Hi Willy,

Op 2-10-2018 om 3:56 schreef Willy Tarreau:

it's important to cut this into pieces
to figure what it's showing.


Okay, the good thing is, I suppose, that we already have a reproduction 
of sorts.

What would be the best way to try and get to the bottom of this? Add 
debug output to the haproxy code? Get some kind of trace? Anything in 
particular that would be of interest?


Regards,

PiBa-NL (Pieter)



