Load Balancing Software Users

2019-11-19 Thread Craig Wilson
Good Day, I would like to know if you are interested in reaching out to "Load Balancing Software Users". If you would like to see a few examples I can send you the names of a few firms that use the specific technologies for review. Looking forward to helping you build new revenue streams for

Re: [PATCH] BUG/MINOR: init: fix set-dumpable when using uid/gid

2019-11-19 Thread Willy Tarreau
On Tue, Nov 19, 2019 at 10:11:36AM +0100, William Dauchy wrote: > Here is the backport for haproxy-20 tree. Now merged, thanks William. Willy

HAProxy question

2019-11-19 Thread Micael Gillet
Hello, As part of a project, I have some questions about HAProxy's abilities. Could you confirm if HAProxy is able to handle the following points? 1. STP Protection (RSTP) 2. VLANs interfaces 3. HA Cluster in Active / Passive mode 4. SNMP for monitoring 5.

Re: [PATCH] BUG/MINOR: init: fix set-dumpable when using uid/gid

2019-11-19 Thread William Dauchy
Hi, On Tue, Nov 19, 2019 at 02:42:23PM +0500, Илья Шипицин wrote: > small question. > `/proc/sys/fs/suid_dumpable` is linux specific. will it work under freebsd, > openbsd ? windows ? > also, linux might not mount that filesystem. will it work ? this code is protected around USE_PRCTL define,

[PATCH] BUG/MINOR: init: fix set-dumpable when using uid/gid

2019-11-19 Thread William Dauchy
in mworker mode used with uid/gid settings, it was not possible to get a coredump despite the set-dumpable option. indeed, the prctl(2) manual page specifies the dumpable attribute is reverted to `/proc/sys/fs/suid_dumpable` under a few conditions, such as when the process effective user and group are changed.

Re: [PATCH] BUG/MINOR: init: fix set-dumpable when using uid/gid

2019-11-19 Thread Илья Шипицин
Tue, 19 Nov 2019 at 14:15, William Dauchy: > in mworker mode used with uid/gid settings, it was not possible to get > a coredump despite the set-dumpable option. > indeed the prctl(2) manual page specifies the dumpable attribute is reverted > to `/proc/sys/fs/suid_dumpable` in a few conditions

http-buffer-request details

2019-11-19 Thread Илья Шипицин
hello, how is that supposed to work ? https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt#L6225 does it buffer the entire body ? does it use memory / hdd for buffering ? how are those buffers allocated ? what if I do not have a lot of RAM ? thanks, Ilya Shipitcin

Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-19 Thread Christopher Faulet
Hi William, I missed Pierre's email. I'm CCing him. On 18/11/2019 at 21:00, William Dauchy wrote: Thanks. Having a way to filter metrics in the Prometheus exporter was on my todo-list :) Filtering on scopes is pretty simple and it is a good start to solve performance issues for huge configs.

Re: HAProxy question

2019-11-19 Thread Aleksandar Lazic
Hi. Nov 19, 2019 11:05:34 AM Micael Gillet : > Hello, As part of a project, I have some questions about HAProxy's abilities. > Could you confirm if HAProxy is able to handle the following points? > > * STP Protection (RSTP) > * VLANs interfaces This is too low-level for HAProxy, IMHO. >

Re: Firewall and Haproxy

2019-11-19 Thread Baptiste
On Sun, Nov 17, 2019 at 2:41 PM TomK wrote: > Hey All, > > When adding hosts to a F/W behind a VIP (keepalived for example) to > which Haproxy is bound, should just the VIP be added to the F/W or would > all member hosts behind Haproxy need to be added as well? > > If all member hosts behind

master-worker no-exit-on-failure with SO_REUSEPORT and a port being already in use

2019-11-19 Thread Christian Ruppert
Hi list, I'm facing some issues with already-in-use ports and the fallback feature during a reload. SO_REUSEPORT already makes it easier/better but not perfect, as there are still cases where it fails. In my test case I've got a Squid running on port 80 and an HAProxy with "master-worker

Re: master-worker no-exit-on-failure with SO_REUSEPORT and a port being already in use

2019-11-19 Thread William Lallemand
On Tue, Nov 19, 2019 at 03:45:09PM +0100, Christian Ruppert wrote: > Hi list, > Hello, > I'm facing some issues with already-in-use ports and the fallback > feature during a reload. SO_REUSEPORT already makes it easier/better > but not perfect, as there are still cases where it fails. > In

Re: http-buffer-request details

2019-11-19 Thread Tim Düsterhus
Christopher, On 19.11.19 at 16:23 Christopher Faulet wrote: > As mentioned in the documentation, HTTP processing is delayed until > the whole body is received or the request buffer is full. The condition > about the first chunk of a chunked request is only valid for the legacy > HTTP mode. It
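For readers following this thread, the behavior described above can be illustrated with a small configuration fragment (the frontend/backend names and the routing rule are illustrative, not from the thread):

```
# "option http-buffer-request" delays HTTP processing until the whole
# body is received or the request buffer is full. Buffering is done in
# memory only, in the per-stream request buffer (sized by tune.bufsize);
# nothing is spooled to disk, which answers the memory/hdd question.
frontend fe_main
    mode http
    bind :8080
    option http-buffer-request
    # Body-based decisions only see data that has been buffered.
    use_backend be_upload if { req.body_size gt 0 }
    default_backend be_app
```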

Re: http-buffer-request details

2019-11-19 Thread Christopher Faulet
On 19/11/2019 at 16:32, Tim Düsterhus wrote: Christopher, On 19.11.19 at 16:23 Christopher Faulet wrote: As mentioned in the documentation, HTTP processing is delayed until the whole body is received or the request buffer is full. The condition about the first chunk of a chunked request

Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-19 Thread Christopher Faulet
On 19/11/2019 at 14:51, Christopher Faulet wrote: Regarding the problem of servers in maintenance, since we parse the query-string, it is possible to add more filters. I may add a parameter to filter out servers in maintenance. For instance, by passing "no-maint" in the query-string, all

RE: native prometheus exporter: retrieving check_status

2019-11-19 Thread Pierre Cheynier
> Hi Pierre, Hi! > I addressed this issue based on William's idea. I also proposed to add a > filter to exclude all servers in maintenance from the export. Let me know if > you > see a better way to do so. For the moment, from the exporter point of view, > it > is not really hard to do

Re: native prometheus exporter: retrieving check_status

2019-11-19 Thread William Dauchy
On Tue, Nov 19, 2019 at 03:31:28PM +0100, Christopher Faulet wrote: > > * also for `check_status`, there is the case of L7STS and its associated > > values that are present in another field. Most probably it could benefit > > from a better representation in a prometheus output (thanks to labels)?

Re: [PATCH] MINOR: contrib/prometheus-exporter: allow to select the exported metrics

2019-11-19 Thread William Dauchy
Hi Christopher, On Tue, Nov 19, 2019 at 04:35:47PM +0100, Christopher Faulet wrote: > Here is updated patches with the support for "scope" and "no-maint" > parameters. If this solution is good enough for you (and if it works :), I > will push it. this looks good to me and the test was conclusive
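Once the patches discussed in this thread are applied, scraping with the new filters would look roughly like this (the address, port, and `/metrics` path are assumptions about the local setup, not from the thread):

```shell
# Limit the export to some groups of metrics; repeating "scope" combines them.
curl -s 'http://127.0.0.1:8404/metrics?scope=frontend&scope=backend'

# Export server metrics but skip servers in maintenance.
curl -s 'http://127.0.0.1:8404/metrics?scope=server&no-maint'
```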

Re: native prometheus exporter: retrieving check_status

2019-11-19 Thread Christopher Faulet
Hi Pierre, Sorry I missed your email. Thanks to William for the reminder. On 15/11/2019 at 15:55, Pierre Cheynier wrote: We've recently tried to switch to the native prometheus exporter, but were quickly stopped in our initiative given the output on one of our preprod servers: $ wc -l

Re: http-buffer-request details

2019-11-19 Thread Christopher Faulet
On 19/11/2019 at 13:20, Илья Шипицин wrote: hello, how is that supposed to work ? https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt#L6225 does it buffer the entire body ? does it use memory / hdd for buffering ? how are those buffers allocated ? what if I do not have a

Re: http-buffer-request details

2019-11-19 Thread Tim Düsterhus
Christopher, On 19.11.19 at 16:39 Christopher Faulet wrote: > You're right Tim. But this one is small enough to be fixed immediately > :). I will push the patch with other ones I'm working on. But it is > already fixed. > That's even better of course! Disregard my comment in *this specific*

Re: master-worker no-exit-on-failure with SO_REUSEPORT and a port being already in use

2019-11-19 Thread William Lallemand
On Tue, Nov 19, 2019 at 04:19:26PM +0100, William Lallemand wrote: > > I then add another bind for port 80, which is in use by squid already, > > and try to reload HAProxy. It takes some time until it fails: > > > > Nov 19 14:39:21 894a0f616fec haproxy[2978]: [WARNING] 322/143921 (2978) > > :

RE: travis-ci: should we drop openssl-1.1.0 and replace it with 3.0 ?

2019-11-19 Thread Gibson, Brian (IMS)
Maybe after they stop security fixes we can drop 1.1.0. I know there are many distributions still in support that use this branch. 3.0 doesn’t exist yet, and won’t until later in 2020 which is unfortunate since that means there will be no FIPS validated branch for several months. From: Илья

travis-ci: should we drop openssl-1.1.0 and replace it with 3.0 ?

2019-11-19 Thread Илья Шипицин
hello, https://www.openssl.org/source/ says "The 1.1.0 series is currently only receiving security fixes and will go out of support on 11th September 2019" what if we drop it ? and replace with 3.0 ? cheers, Ilya Shipitcin

Re: travis-ci: should we drop openssl-1.1.0 and replace it with 3.0 ?

2019-11-19 Thread Илья Шипицин
well, we can actually build a bigger matrix by adding builds. I just want to save some electricity on unneeded builds. Tue, 19 Nov 2019 at 22:41, Илья Шипицин: > hello, > > https://www.openssl.org/source/ says "The 1.1.0 series is currently only > receiving security fixes and will go out of
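The matrix being discussed could be expressed along these lines in a `.travis.yml` (a hypothetical fragment; the variable name and version pins are illustrative, not the project's actual CI file):

```
# One job per OpenSSL branch: dropping 1.1.0 or adding the master
# branch (the future 3.0) is then a one-line change in the matrix.
matrix:
  include:
    - env: OPENSSL_VERSION=1.0.2u
    - env: OPENSSL_VERSION=1.1.1d
    - env: OPENSSL_VERSION=master   # catch 3.0 API breakage early
```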

Re: travis-ci: should we drop openssl-1.1.0 and replace it with 3.0 ?

2019-11-19 Thread Илья Шипицин
yep, 3.0 stands for the openssl master branch. the point is to catch incompatibilities before it is released. Tue, 19 Nov 2019 at 22:51, Gibson, Brian (IMS): > Maybe after they stop security fixes we can drop 1.1.0. I know there are > many distributions still in support that use this branch.

Re: travis-ci: should we drop openssl-1.1.0 and replace it with 3.0 ?

2019-11-19 Thread Lukas Tribus
Hello, On Tuesday, 19 November 2019, Илья Шипицин wrote: > yep, 3.0 stands for openssl master branch. > the point is to catch incompatibilities before it is released. > I am objecting to this. This can be done WHEN openssl declares that the API is stable. Testing and implementing build fixes