Re: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Willy Tarreau
Hi Ben,

On Wed, Apr 24, 2019 at 06:20:16PM +0200, Christopher Faulet wrote:
> On 24/04/2019 at 17:56, Ben Shillito wrote:
> > Hi Willy,
> > 
> > Thanks for the update. We will take a look and get a patch over to you.
> > 
> 
> Hi,
> 
> Ben, the function _51d_fetch() is not HTX aware. Take
> a look at other HTTP sample fetches in src/http_fetch.c.

I'm just realizing that we never got your patch for this. Do you have
a patch handy, or should we quickly write one for now to prevent haproxy
from starting with both 51d and HTX enabled? HTX is on by default in
2.0 and the final release is expected around this week-end...

Please just let me know; at the very least I don't want users to hit random
crashes when using this code.
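
For reference, the HTX-aware sample fetches in src/http_fetch.c roughly
follow the shape below. This is only an indicative sketch from memory
(names and signatures should be double-checked against the tree), not
the actual 51d change:

  /* Sketch only: the general shape of an HTX-aware header lookup,
   * modelled on the sample fetches in src/http_fetch.c. Identifiers
   * here are assumptions to be verified, not the real 51d code. */
  static int example_fetch_ua(struct sample *smp)
  {
      struct channel *chn = &smp->strm->req;

      if (IS_HTX_STRM(smp->strm)) {
          /* HTX: walk the structured message, not the raw buffer */
          struct htx *htx = htxbuf(&chn->buf);
          struct http_hdr_ctx ctx = { .blk = NULL };

          if (!http_find_header(htx, ist("User-Agent"), &ctx, 1))
              return 0;
          smp->data.type = SMP_T_STR;
          smp->data.u.str.area = ctx.value.ptr;
          smp->data.u.str.data = ctx.value.len;
          smp->flags |= SMP_F_CONST;
          return 1;
      }

      /* legacy: offsets into the linear HTTP buffer, which is exactly
       * what breaks once HTX reorganises the message representation */
      return 0;
  }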

Thanks,
Willy



Re: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Willy Tarreau
On Wed, Jun 12, 2019 at 08:52:53AM +, Ben Shillito wrote:
> Hi Willy,
> 
> This unfortunately fell down our list of priorities in the last few weeks.

Oh, I can certainly understand, and I'm sorry I forgot about it and didn't
ping you earlier.

> However, as this is a bit more urgent now with your weekend release, I will
> get the change for HTX awareness to you either today or tomorrow if that is
> ok with you?

OK that's perfect, thank you! It will be less work on my side to merge your
fix than to write the verification code to refuse to start!

Thanks,
Willy



Re: Idea + question regarding the build targets

2019-06-12 Thread Tim Düsterhus
Willy,

On 12.06.19 at 07:38, Willy Tarreau wrote:
> So my questions are :
>   - does anybody think it's a bad idea ?

I do. Even if the Linux distribution in question theoretically supports
a certain feature, you are:

1. Not guaranteed that the development headers are installed.

As a specific example: Debian ships systemd (and by extension
libsystemd) by default. But I would not be able to compile HAProxy with
USE_SYSTEMD without installing libsystemd-dev first.

2. Not guaranteed that the user is interested in certain features.

I guess most are interested in OpenSSL support, but Lua might be more
questionable. Why should I carry the Lua interpreter along when I'm not
using Lua scripts anyway?

Personally I would leave Lua out on most of my servers if I compiled
my own HAProxy [1]. This is because I realized, by doing stupid
but fun things with Lua, that Lua support is way less reliable than
HAProxy's C code. Possibly because most people don't need Lua and thus
it gets less exposure in production.

(1) could be solved with feature detection (e.g. autoconf, but you
probably don't want to touch that). (2) not so much.

Both could be solved with some kind of wizard that asks the user what
they want and detects the appropriate options and flags based on the
system. I believe something similar exists for the Linux kernel (I never
compiled my own kernel).

Best regards
Tim Düsterhus

[1] Big if there. I'm super happy with the job the Debian HAProxy team
is doing by providing a separate APT repository for the newest versions.



RE: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Ben Shillito
Hi Willy,

Great, thanks for those changes, and good spot.

I agree that this is a significant step forward, and having the entire codebase 
testable in CI will certainly make everything that bit smoother.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 12 June 2019 17:07
To: Ben Shillito 
Cc: Christopher Faulet ; HAProxy 
Subject: Re: [PATCH] wurfl device detection build fixes and dummy library

On Wed, Jun 12, 2019 at 02:49:37PM +, Ben Shillito wrote:
> While I was working on the HTX changes, I thought it was probably a
> good time to also implement the dummy library as I had my brain in 'HAProxy 
> mode'.

Ah, excellent, thank you :

  $./haproxy -vv|grep -i 51d
  Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE -PCRE_JIT 
-PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H -VSYSCALL -GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 -ZLIB 
+SLZ +CPU_AFFINITY -TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD 
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
  Built with 51Degrees support.

:-)

Ilya will likely be happy to see that we can now build 100% of our codebase in 
the CI, this is a significant step forward!

I made very minor changes to your patches: for the first one I marked it 
"BUG/MINOR" so that we backport it to 1.9 (since it's still broken there), and 
for the second I fixed the doc where you accidentally dropped "51d" after 
"contrib/" in the build command :-)

Thanks a lot for your responsiveness,
Willy



Re: Increase in sockets in TIME_WAIT with 1.9.x

2019-06-12 Thread Dave Chiluk
I did a bit more introspection on our TIME_WAIT connections.  The increase
in sockets in TIME_WAIT is definitely from old connections to our backend
server instances.  Considering that this server doesn't actually serve real
traffic, we can make a reasonable assumption that this is almost entirely
due to an increase in healthchecks.

Doing an strace on haproxy 1.8.17 we see

sudo strace -e setsockopt,close -p 15743
strace: Process 15743 attached
setsockopt(17, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(17, SOL_SOCKET, SO_LINGER, {onoff=1, linger=0}, 8) = 0
close(17)   = 0


Doing the same strace on 1.9.8 we see

sudo strace -e setsockopt,close -p 6670
strace: Process 6670 attached
setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
close(4)= 0


The setsockopt(fd, SOL_SOCKET, SO_LINGER, {onoff=1, linger=0}, 8) calls
appear to be missing from the 1.9.8 trace.
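
For reference, the 1.8 behaviour matches the classic pattern below (a
minimal standalone sketch, not HAProxy code): disabling lingering with a
zero timeout right before close() makes the kernel send an RST instead of
a FIN, so the local socket never enters TIME_WAIT.

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Close a TCP connection with an RST so no TIME_WAIT is left behind. */
  static void close_with_reset(int fd)
  {
      const struct linger lg = { .l_onoff = 1, .l_linger = 0 };
      const int one = 1;

      setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
      setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
      close(fd); /* connection is reset, no TIME_WAIT on this side */
  }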

We are running centos 7 with kernel 3.10.0-957.1.3.el7.x86_64.

I'll keep digging into this, and see if I can get stack traces that result
in the setsockopt calls on 1.8.17 so the stack can be more closely
inspected.

Thanks for any help,
Dave


On Tue, Jun 11, 2019 at 2:29 AM Willy Tarreau  wrote:

> On Mon, Jun 10, 2019 at 04:01:27PM -0500, Dave Chiluk wrote:
> > We are in the process of evaluating upgrading to 1.9.8 from 1.8.17,
> > and we are seeing a roughly 70% increase in sockets in TIME_WAIT on
> > our haproxy servers with a mostly idle server cluster
> > $ sudo netstat | grep 'TIME_WAIT' | wc -l
>
> Be careful, TIME_WAIT on the frontend is neither important nor
> representative of anything, only the backend counts.
>
> > Looking at the source/destination of this it seems likely that this
> > comes from healthchecks.  We also see a corresponding load increase on
> > the backend applications serving the healthchecks.
>
> It's very possible and problematic at the same time.
>
> > Checking the git logs for healthcheck was unfruitful.  Any clue what
> > might be going on?
>
> Normally we make lots of efforts to close health-check responses with
> a TCP RST (by disabling lingering before closing). I don't see why it
> wouldn't be done here. What OS are you running on and what do your
> health checks look like in the configuration ?
>
> Thanks,
> Willy
>


Re: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Willy Tarreau
On Wed, Jun 12, 2019 at 02:49:37PM +, Ben Shillito wrote:
> While I was working on the HTX changes, I thought it was probably a good time
> to also implement the dummy library as I had my brain in 'HAProxy mode'.

Ah, excellent, thank you :

  $./haproxy -vv|grep -i 51d
  Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE -PCRE_JIT 
-PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H -VSYSCALL -GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 -ZLIB 
+SLZ +CPU_AFFINITY -TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD 
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
  Built with 51Degrees support.

:-)

Ilya will likely be happy to see that we can now build 100% of our
codebase in the CI, this is a significant step forward!

I made very minor changes to your patches: for the first one I marked
it "BUG/MINOR" so that we backport it to 1.9 (since it's still broken there),
and for the second I fixed the doc where you accidentally dropped "51d" after
"contrib/" in the build command :-)

Thanks a lot for your responsiveness,
Willy



Re: Idea + question regarding the build targets

2019-06-12 Thread Olivier D
Hi,


On Wed, 12 Jun 2019 at 19:19, Willy Tarreau  wrote:

> Hi guys,
>
> On Wed, Jun 12, 2019 at 04:27:42PM +0200, Lukas Tribus wrote:
> (...)
> > I think it's a bad idea.
> >
> > Basically what Tim says (I was interrupted several times while writing
> > this email).
>
> OK, and this morning William told me he thought the same for the same
> reasons, so definitely I'm the one wrong here, which justifies that I
> asked. And I totally agree with your arguments.
>
> (...)
> > Those will be the interactions, with a lot of round-trips back and
> > forth. At least the user understands that USE_LUA=1 means that LUA
> > will be included. With TARGET=ubuntu18 the user may not even know what
> > LUA is, let alone that it's a dependency.
>
> I was actually thinking about enabling what I expect to be installed
> by default, i.e. zlib & openssl mainly. But at first I didn't want to
> address libraries as much as the default OS which involves a minimal
> kernel version and a libc. In the past we've had to guess quite a few
> times some settings, which were nicely addressed by the split on kernel
> 2.6.28, but it seems cleaner to support a kernel+libc combination, which
> is what made me think about distros, hence their respective libs.
>
> > So do we break the build by
> > default and just document in INSTALL that in Ubuntu 18.04 the LUA
> > dependency must be fulfilled by installing the source package, which
> > can be either liblua5.1-0-dev, liblua5.2-dev or liblua5.3-dev?
>
> I agree.
>
> > INSTALL already contains suggestions about what USE_ flags to include,
> > as a matter of fact all 5 external libraries mentioned above are
> > suggested:
> >
> >
> http://git.haproxy.org/?p=haproxy.git;a=blob;f=INSTALL;h=9df17caf68c4f00cdaaaec2ec929f0af66e1d297;hb=HEAD#l35
> >
> >
> > To be honest, I don't think this is very common in OSS projects; most
> > of them use configure scripts that just include the library if the
> > headers are detected, or don't link against it at all if they aren't there.
> > But we would break the build here, which is different.
>
> OK. When discussing this with William, we figured it could be interesting
> instead to have some aliases which are maybe more symbolic, such as :
>
>   - linux-complete : full set of supported features, will simply fail
> if you don't have all libs installed, useful primarily for devs ;
>
>   - linux-recent : common set of features present by default on latest
> release of LTS distros at the time of releasing. I.e. could be the
> default if you have a reasonably up-to-date system ;
>
>   - linux-ancient : common set of features present by default on oldest
> supported LTS distros at the time of releasing. Should be a safe
> fallback for everyone who doesn't want to care more ;
>
>   - linux-minimal : basically just oldest supported LTS kernel + equivalent
> libc, would serve as a starting point to manually add USE_*..
>
> In fact I'm still a bit concerned by the fact that we continue to
> advertise 2.6.28 as the split point while no older kernels are still
> supported, and that some of the features placed there very likely still
> don't work well in embedded environments (openwrt for example).
>
> I'd like very much to get rid of this laughing "2.6.28" now but if we
> only use "linux" nobody will update it and we'll be back to the same
> situation in one or two versions. With the split above we can have
> fast moving targets ("complete", updated during dev; "recent", updated
> with new distro announces) and slow moving ones ("ancient", "minimal")
> and that might be a sweet spot.
>
> > > just like we already have for solaris/openbsd/freebsd for example
> >
> > None of these include external libraries though, like OpenSSL, LUA,
> > zlib, PCRE1/2. Only kernel features and "very core" libraries are
> > included with the solaris/BSD targets (just like linux2628). So the
> > build may break because a basic and crucial threading lib is missing,
> > but not because LUA is not there. That's a very different kind of build
> > failure.
>
> Agreed, I was probably a bit too enthusiastic about this idea and
> conflated several things. We should probably get back to platform and
> features separately. openssl, lua, zlib, pcre are in fact features if
> they are not present by default. I was happy to place some of them by
> default but if they are not necessarily present, let's not force :-)
>
> thanks guys for fueling the discussion.
>
> Willy
>

Sorry if I'm totally "out" on this point, but I was wondering why HAProxy
does not provide a simple "configure" script: all available options would
be listed (I know it's in the doc, but hey, how many people actually read
it? :p), and the script would check whether the chosen options can be
compiled (whether pcre2 is available, whether openssl-devel is available,
and so on). You could then pick a default config, for example SSL always
compiled in, LUA not, if no other options are provided.

Olivier


Re: Idea + question regarding the build targets

2019-06-12 Thread Willy Tarreau
Hi guys,

On Wed, Jun 12, 2019 at 04:27:42PM +0200, Lukas Tribus wrote:
(...)
> I think it's a bad idea.
> 
> Basically what Tim says (I was interrupted several times while writing
> this email).

OK, and this morning William told me he thought the same for the same
reasons, so definitely I'm the one wrong here, which justifies that I
asked. And I totally agree with your arguments.

(...)
> Those will be the interactions, with a lot of round-trips back and
> forth. At least the user understands that USE_LUA=1 means that LUA
> will be included. With TARGET=ubuntu18 the user may not even know what
> LUA is, let alone that it's a dependency.

I was actually thinking about enabling what I expect to be installed
by default, i.e. zlib & openssl mainly. But at first I didn't want to
address libraries as much as the default OS which involves a minimal
kernel version and a libc. In the past we've had to guess quite a few
times some settings, which were nicely addressed by the split on kernel
2.6.28, but it seems cleaner to support a kernel+libc combination, which
is what made me think about distros, hence their respective libs.

> So do we break the build by
> default and just document in INSTALL that in Ubuntu 18.04 the LUA
> dependency must be fulfilled by installing the source package, which
> can be either liblua5.1-0-dev, liblua5.2-dev or liblua5.3-dev?

I agree.

> INSTALL already contains suggestions about what USE_ flags to include,
> as a matter of fact all 5 external libraries mentioned above are
> suggested:
> 
> http://git.haproxy.org/?p=haproxy.git;a=blob;f=INSTALL;h=9df17caf68c4f00cdaaaec2ec929f0af66e1d297;hb=HEAD#l35
> 
> 
> To be honest, I don't think this is very common in OSS projects; most
> of them use configure scripts that just include the library if the
> headers are detected, or don't link against it at all if they aren't there.
> But we would break the build here, which is different.

OK. When discussing this with William, we figured it could be interesting
instead to have some aliases which are maybe more symbolic, such as :

  - linux-complete : full set of supported features, will simply fail
if you don't have all libs installed, useful primarily for devs ;

  - linux-recent : common set of features present by default on latest
release of LTS distros at the time of releasing. I.e. could be the
default if you have a reasonably up-to-date system ;

  - linux-ancient : common set of features present by default on oldest
supported LTS distros at the time of releasing. Should be a safe
fallback for everyone who doesn't want to care more ;

  - linux-minimal : basically just oldest supported LTS kernel + equivalent
libc, would serve as a starting point to manually add USE_*..

In fact I'm still a bit concerned by the fact that we continue to
advertise 2.6.28 as the split point while no older kernels are still
supported, and that some of the features placed there very likely still
don't work well in embedded environments (openwrt for example).

I'd like very much to get rid of this laughing "2.6.28" now but if we
only use "linux" nobody will update it and we'll be back to the same
situation in one or two versions. With the split above we can have
fast moving targets ("complete", updated during dev; "recent", updated
with new distro announces) and slow moving ones ("ancient", "minimal")
and that might be a sweet spot.

> > just like we already have for solaris/openbsd/freebsd for example
> 
> None of these include external libraries though, like OpenSSL, LUA,
> zlib, PCRE1/2. Only kernel features and "very core" libraries are
> included with the solaris/BSD targets (just like linux2628). So the
> build may break because a basic and crucial threading lib is missing,
> but not because LUA is not there. That's a very different kind of build
> failure.

Agreed, I was probably a bit too enthusiastic about this idea and
conflated several things. We should probably get back to platform and
features separately. openssl, lua, zlib, pcre are in fact features if
they are not present by default. I was happy to place some of them by
default but if they are not necessarily present, let's not force :-)

thanks guys for fueling the discussion.

Willy



RE: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Ben Shillito
Hi Willy,

This unfortunately fell down our list of priorities in the last few weeks. 
However, as this is a bit more urgent now with your weekend release, I will get 
the change for HTX awareness to you either today or tomorrow if that is ok with 
you?

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 12 June 2019 09:14
To: Ben Shillito 
Cc: Christopher Faulet ; HAProxy 
Subject: Re: [PATCH] wurfl device detection build fixes and dummy library

Hi Ben,

On Wed, Apr 24, 2019 at 06:20:16PM +0200, Christopher Faulet wrote:
> On 24/04/2019 at 17:56, Ben Shillito wrote:
> > Hi Willy,
> >
> > Thanks for the update. We will take a look and get a patch over to you.
> >
>
> Hi,
>
> Ben, the function _51d_fetch() is not HTX aware. Take a look at other
> HTTP sample fetches in src/http_fetch.c.

I'm just realizing that we never got your patch for this. Do you have a patch 
handy, or should we quickly write one for now to prevent haproxy from starting 
with both 51d and HTX enabled? HTX is on by default in
2.0 and the final release is expected around this week-end...

Please just let me know; at the very least I don't want users to hit random 
crashes when using this code.

Thanks,
Willy



Re: Idea + question regarding the build targets

2019-06-12 Thread Lukas Tribus
Hello Willy,


On Wed, 12 Jun 2019 at 07:39, Willy Tarreau  wrote:
>
> Hi all,
>
> I'm currently re-reading my notes for the upcoming release and found
> something I noted not that long ago regarding the TARGET variable of
> the makefile. The list of operating systems hasn't changed in a while
> and in parallel we've added a bunch of build options that do not solely
> depend on the kernel anymore, but more on features related to the whole
> environment (pcre, openssl, lua, systemd, zlib, and so on).
>
> I thought that now that the targets definition is very simple in the
> makefile, we could simply add distro names and versions and adjust the
> set of default USE_xxx variables based on this. We could thus have
> rhel{8,7,6,5}, debian{9,8,7,6}, ubuntu{18,16}, fedora{31,30,29} and so
> on, just like we already have for solaris/openbsd/freebsd for example,
> except that we generally only reference one version there. It would
> become easier to add new variants even to stable distros to ease the
> build for new users, who would just have to issue "make TARGET=debian9"
> for example, without having to figure out whether PCRE2 or OPENSSL has
> to be specified, or any such thing.
>
> This is fairly minor and basically just requires a few lines to
> be added to the makefile and to the doc, so I'm fine with including
> this in the final release.
>
> So my questions are :
>   - does anybody think it's a bad idea ?

I think it's a bad idea.

Basically what Tim says (I was interrupted several times while writing
this email).

For the build not to fail, all those libraries *and their corresponding
source packages* have to be installed first. Even if (I imagine) some of
those libraries are installed by default in some OS configurations, the
source packages with the header files are still missing, and the build
would fail by default for every single dependency, of which there are at
least 5 (PCRE and variants, OpenSSL, LUA, ZLIB, SYSTEMD).

By not requiring the user to think about the libraries, we are hiding
complexity, which will then come up on the mailing list, Discourse and
GitHub issues.

"my build fails with a libssl.h error" --> install libssl-dev
"now the build still fails, but with a libz.h error" --> install libz-dev
"now the build still fails, but with a systemd.h error" --> install
systemd-dev libraries


Those will be the interactions, with a lot of round-trips back and
forth. At least the user understands that USE_LUA=1 means that LUA
will be included. With TARGET=ubuntu18 the user may not even know what
LUA is, let alone that it's a dependency. So do we break the build by
default and just document in INSTALL that in Ubuntu 18.04 the LUA
dependency must be fulfilled by installing the source package, which
can be either liblua5.1-0-dev, liblua5.2-dev or liblua5.3-dev?

INSTALL already contains suggestions about what USE_ flags to include,
as a matter of fact all 5 external libraries mentioned above are
suggested:

http://git.haproxy.org/?p=haproxy.git;a=blob;f=INSTALL;h=9df17caf68c4f00cdaaaec2ec929f0af66e1d297;hb=HEAD#l35


To be honest, I don't think this is very common in OSS projects; most
of them use configure scripts that just include the library if the
headers are detected, or don't link against it at all if they aren't there.
But we would break the build here, which is different.



> just like we already have for solaris/openbsd/freebsd for example

None of these include external libraries though, like OpenSSL, LUA,
zlib, PCRE1/2. Only kernel features and "very core" libraries are
included with the solaris/BSD targets (just like linux2628). So the
build may break because a basic and crucial threading lib is missing,
but not because LUA is not there. That's a very different kind of build
failure.



cheers,
lukas



RE: [PATCH] wurfl device detection build fixes and dummy library

2019-06-12 Thread Ben Shillito
Hi Willy,

No worries, that's fine.

While I was working on the HTX changes, I thought it was probably a good time 
to also implement the dummy library as I had my brain in 'HAProxy mode'.

Both patches are attached ready to be merged.

If there is anything missing, let me know and I'll try to make sure you 
have everything you need for a weekend release.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 12 June 2019 10:12
To: Ben Shillito 
Cc: Christopher Faulet ; HAProxy 
Subject: Re: [PATCH] wurfl device detection build fixes and dummy library

On Wed, Jun 12, 2019 at 08:52:53AM +, Ben Shillito wrote:
> Hi Willy,
>
> This unfortunately fell down our list of priorities in the last few weeks.

Oh, I can certainly understand, and I'm sorry I forgot about it and didn't ping 
you earlier.

> However, as this is a bit more urgent now with your weekend release, I
> will get the change for HTX awareness to you either today or tomorrow
> if that is ok with you?

OK that's perfect, thank you! It will be less work on my side to merge your fix 
than to write the verification code to refuse to start!

Thanks,
Willy


0001-FEAT-51d-htx-The-_51d_fetch-method-and-the-methods-i.patch
Description:  0001-FEAT-51d-htx-The-_51d_fetch-method-and-the-methods-i.patch


0002-MINOR-51d-Added-dummy-libraries-for-the-51Degrees-mo.patch
Description:  0002-MINOR-51d-Added-dummy-libraries-for-the-51Degrees-mo.patch


Re: [PATCH] BUILD: Silence gcc warning about unused return value

2019-06-12 Thread Vincent Bernat
 ❦ 12 June 2019 20:47 +02, Tim Duesterhus :

> - write(2, trash.area, trash.data);
> + shut_your_big_mouth_gcc(write(2, trash.area, trash.data));

An alternative not discussed in 89efaed6b67b (which introduced this
function) is to use "(void)!":

 (void)!write(2, trash.area, trash.data);

For an entertaining discussion, see:

 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66425

Of course, like the "if (foo()) /* do nothing */;", this can be broken
in the future.
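
For completeness, a tiny standalone illustration of both tricks; the
wrapper below is a generic sketch, not HAProxy's actual
shut_your_big_mouth_gcc() definition:

  #include <unistd.h>

  /* Passing the result as an argument counts as "using" it, so
   * -Wunused-result stays quiet at the call site. */
  static inline void ignore_result(ssize_t ret)
  {
      (void)ret; /* deliberately discarded */
  }

  int main(void)
  {
      static const char msg[] = "panic\n";

      (void)!write(2, msg, sizeof(msg) - 1);         /* cast trick: '!' consumes the value */
      ignore_result(write(2, msg, sizeof(msg) - 1)); /* wrapper trick */
      return 0;
  }
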
-- 
Use library functions.
- The Elements of Programming Style (Kernighan & Plauger)



[PATCH 1/1] BUG/MEDIUM: compression: Set Vary: Accept-Encoding if the response would be compressed

2019-06-12 Thread Tim Duesterhus
Make HAProxy set the `Vary: Accept-Encoding` response header if the server
response would normally be compressed based on the current configuration.

Specifically make sure to:
1. Disregard the *request* headers ...
2. Disregard the current compression rate and other temporary conditions ...
... when determining whether the `Vary` header must be set.

The *response* headers generated by the upstream are allowed to be taken
into account, and this patch does take them into account so as not to set
`Vary` on e.g. Content-Types that are never going to be compressed.

A small issue remains: The User-Agent is not added to the `Vary` header,
despite being relevant to the response. Adding the User-Agent header would
make responses effectively uncacheable and it's unlikely to see a Mozilla/4
in the wild in 2019.

Add a reg-test to ensure this behaviour.
---
 reg-tests/compression/vary.vtc | 187 +
 src/flt_http_comp.c|  81 ++
 2 files changed, 248 insertions(+), 20 deletions(-)
 create mode 100644 reg-tests/compression/vary.vtc

diff --git a/reg-tests/compression/vary.vtc b/reg-tests/compression/vary.vtc
new file mode 100644
index 0..58514700f
--- /dev/null
+++ b/reg-tests/compression/vary.vtc
@@ -0,0 +1,187 @@
+varnishtest "Compression sets Vary header"
+
+#REQUIRE_VERSION=1.9
+#REQUIRE_OPTION=ZLIB|SLZ
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+expect req.url == "/plain/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+
+rxreq
+expect req.url == "/html/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/dup-etag/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -hdr "ETag: \"123\"" \
+  -hdr "ETag: \"123\"" \
+  -bodylen 100
+} -repeat 2 -start
+
+
+haproxy h1 -conf {
+defaults
+mode http
+${no-htx} option http-use-htx
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe-gzip
+bind "fd@${fe_gzip}"
+default_backend be-gzip
+
+backend be-gzip
+compression algo gzip
+compression type text/plain
+server www ${s1_addr}:${s1_port}
+
+frontend fe-nothing
+bind "fd@${fe_nothing}"
+default_backend be-nothing
+
+backend be-nothing
+server www ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_gzip_sock} {
+txreq -url "/plain/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.content-encoding == "gzip"
+expect resp.http.vary == "Accept-Encoding"
+gunzip
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-null"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-null"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url 

[PATCH 0/1] compression: Set Vary: Accept-Encoding

2019-06-12 Thread Tim Duesterhus
Willy,

First things first:

Yes, I'm aware that this is shortly before 2.0. Either put this into 2.0
or put this into `next` if you consider it too risky (or just wait until
2.0 release before merging).

Also *add backporting instructions*, please. I recommend backporting it to
2.0 at the least.

That said:

This thread contains two "competing" patches to fix the BUG that HAProxy
does not set the `Vary` response header when the compression filter is
applied. Without the `Vary` header, the response may be miscached
by intermediate caching proxies.

Please select the one you like better, because I wasn't sure. I'll explain
the differences below:

PATCH 1 (the one *without* v2):
---

This one attempts to set the `Vary` response header only when it is
*required*, so as not to pollute responses that are never going to be
compressed based on the current configuration (e.g. because the
Content-Type is not listed in `compression type`).

To do so the patch adds a new `would_compress` flag and requires careful
checking in `htx_set_comp_reshdr`:

1. All the response conditions must go first.
2. Then the `would_compress` flag must be set.
3. Then the other conditions (e.g. compression rate) must be checked.

Otherwise the `would_compress` flag might be missing due to a temporary
condition, leading to a missing `Vary` header, leading to bugs.
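
Schematically, the ordering looks like the sketch below (hypothetical
helper names, meant only to illustrate the order of the checks, not the
actual patch code):

  struct comp_state_sketch {
      int would_compress; /* drives the Vary: Accept-Encoding header */
  };

  static int decide_compression(struct comp_state_sketch *st,
                                int resp_type_compressible, /* response-only */
                                int client_accepts_gzip,    /* request header */
                                int over_compression_rate)  /* temporary */
  {
      /* 1. permanent, response-only conditions first */
      if (!resp_type_compressible)
          return 0;             /* never compressed -> no Vary needed */

      /* 2. this response *would* be compressed for a capable client,
       *    so Vary: Accept-Encoding must be emitted */
      st->would_compress = 1;

      /* 3. request headers and temporary conditions only decide whether
       *    this particular response actually gets compressed */
      if (!client_accepts_gzip || over_compression_rate)
          return 0;

      return 1;
  }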

PATCH 2 (the one *with* v2):


This one is dumber: if compression is enabled for the proxy, it sets
the `Vary` header unconditionally. Nothing can go wrong here, but it sets
the header for responses where it is not required. These responses tend
to be binary files, which are usually rather large, taking up valuable
space in downstream proxies by possibly being cached multiple times (once
per distinct `Accept-Encoding` request header).

Best regards

Tim Duesterhus (1):
  BUG/MEDIUM: compression: Set Vary: Accept-Encoding if the response
would be compressed

 reg-tests/compression/vary.vtc | 187 +
 src/flt_http_comp.c|  81 ++
 2 files changed, 248 insertions(+), 20 deletions(-)
 create mode 100644 reg-tests/compression/vary.vtc

-- 
2.21.0




[PATCH v2 1/1] BUG/MEDIUM: compression: Set Vary: Accept-Encoding if the response would be compressed

2019-06-12 Thread Tim Duesterhus
Willy,

read the cover letter of this thread before ignoring the first patch; this
one only has a higher version number to avoid mistakes when applying.

Apply with `git am --scissors` to automatically cut the commit message.
-- >8 --
Make HAProxy set the `Vary: Accept-Encoding` response header if the server
response would normally be compressed based on the current configuration.

Specifically make sure to:
1. Disregard the *request* headers ...
2. Disregard the current compression rate and other temporary conditions ...
... when determining whether the `Vary` header must be set.

The *response* headers generated by the upstream are allowed to be taken
into account, but this patch *does not* do so: once compression is enabled,
the `Vary` header will be set for *all* upstream responses.

A small issue remains: The User-Agent is not added to the `Vary` header,
despite being relevant to the response. Adding the User-Agent header would
make responses effectively uncacheable and it's unlikely to see a Mozilla/4
in the wild in 2019.

Add a reg-test to ensure this behaviour.
---
 reg-tests/compression/vary.vtc | 187 +
 src/flt_http_comp.c|  30 +-
 2 files changed, 213 insertions(+), 4 deletions(-)
 create mode 100644 reg-tests/compression/vary.vtc

diff --git a/reg-tests/compression/vary.vtc b/reg-tests/compression/vary.vtc
new file mode 100644
index 0..2f5d82d73
--- /dev/null
+++ b/reg-tests/compression/vary.vtc
@@ -0,0 +1,187 @@
+varnishtest "Compression sets Vary header"
+
+#REQUIRE_VERSION=1.9
+#REQUIRE_OPTION=ZLIB|SLZ
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+expect req.url == "/plain/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+
+rxreq
+expect req.url == "/html/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/dup-etag/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -hdr "ETag: \"123\"" \
+  -hdr "ETag: \"123\"" \
+  -bodylen 100
+} -repeat 2 -start
+
+
+haproxy h1 -conf {
+defaults
+mode http
+${no-htx} option http-use-htx
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe-gzip
+bind "fd@${fe_gzip}"
+default_backend be-gzip
+
+backend be-gzip
+compression algo gzip
+compression type text/plain
+server www ${s1_addr}:${s1_port}
+
+frontend fe-nothing
+bind "fd@${fe_nothing}"
+default_backend be-nothing
+
+backend be-nothing
+server www ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_gzip_sock} {
+txreq -url "/plain/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.content-encoding == "gzip"
+expect resp.http.vary == "Accept-Encoding"
+gunzip
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-null"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == "Accept-Encoding"
+   

[PATCH] BUILD: Silence gcc warning about unused return value

2019-06-12 Thread Tim Duesterhus
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

complains:

> src/debug.c: In function ‘ha_panic’:
> src/debug.c:162:2: warning: ignoring return value of ‘write’, declared with 
> attribute warn_unused_result [-Wunused-result]
>  (void) write(2, trash.area, trash.data);
>^
---
 src/debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/debug.c b/src/debug.c
index fd760faac..059bc6b97 100644
--- a/src/debug.c
+++ b/src/debug.c
@@ -159,7 +159,7 @@ void ha_panic()
chunk_reset(&trash);
chunk_appendf(&trash, "Thread %u is about to kill the process.\n", tid + 1);
ha_thread_dump_all_to_trash();
-   write(2, trash.area, trash.data);
+   shut_your_big_mouth_gcc(write(2, trash.area, trash.data));
for (;;)
abort();
 }
-- 
2.21.0




Re: [PATCH] BUG/MINOR: checks: Include UNIX server paths in arguments

2019-06-12 Thread linuxdaemon Snoonet
Hi Willy,

Do you think we should abstract the formatting of the address into a function
and use the same format for the proxy address for external checks?
Currently my changes just mimic the proxy address logic for the server address,
so if we want to add the ability to easily distinguish the socket types and
support abstract sockets, I think both the proxy address and server address
parameters should be updated.

- linuxdaemon