Re: Disable client keep-alive using ACL

2020-11-18 Thread John Lauro
A couple of possible options...
You could use tcp-request inspect-delay to delay the response a number of
seconds (and accept it quickly if it is legitimate traffic).
You could use redirects, which will make the clients do more requests
(possibly combined with the inspect delays).

That said, it would be useful to be able to force a client connection closed
at times, but there are ways to protect the backends and slow some clients
down without completely blocking them.
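A minimal sketch of the inspect-delay idea (frontend name, certificate path, timing, and the "acl_legit" ACL are all placeholders, untested):

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/site.pem
    # hold each request up to 5 seconds before processing it
    tcp-request inspect-delay 5s
    # known-good traffic is accepted immediately; everything else
    # waits out the full delay before being processed
    tcp-request content accept if acl_legit
    tcp-request content accept if WAIT_END
```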

On Wed, Nov 18, 2020 at 3:14 AM Tim Düsterhus, WoltLab GmbH <
duester...@woltlab.com> wrote:

> Lukas,
>
>
> The reason is that we want to avoid outright blocking with e.g. a 429
> Too Many Requests, because that could affect legitimate traffic. Forcing
> the client to re-establish the connection should not be noticeable for a
> properly implemented client, other than an increased latency.
>
> I'm aware that this will be more costly for us as well, but we have
> plenty of spare capacity at the load balancers.
>
>
>


Re: [PATCH] simplify openssl async detection

2020-11-18 Thread Илья Шипицин
ping :) ?

Sat, 14 Nov 2020 at 02:04, Илья Шипицин wrote:

> Hi.
>
> next define improvement.
>
> Ilya
>


Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Tim Düsterhus
Maciej,

On 18.11.20 at 18:48, Maciej Zdeb wrote:
> Tim, thanks for the hint!

You're welcome.

> Aleksandar, I’ll do my best, however I’m still learning HAProxy internals
> and refreshing my C skills after a very long break. ;) First, I’ll try to
> deliver something simple like „-m beg” and after review from the team we’ll
> see.

As a community contributor who regularly sends patches and who also
sometimes reviews patches from other contributors:

I must say that you made a very good start with your first patch.
Christopher made some adjustments, but it's clear that you read the
CONTRIBUTING guide, because all the formal requirements were met. From
my experience many already fail at that stage.

Just add reg-tests for your next development and I'd say it's perfect
:-) I'm looking forward to seeing your patches, even if I don't need them
personally.

> If someone is in a hurry with this issue and wants to implement it asap, then
> just let me know.
> 
Best regards
Tim Düsterhus



Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Maciej Zdeb
Tim, thanks for the hint!

Aleksandar, I’ll do my best, however I’m still learning HAProxy internals
and refreshing my C skills after a very long break. ;) First, I’ll try to
deliver something simple like „-m beg” and after review from the team we’ll
see.

If someone is in a hurry with this issue and wants to implement it asap, then
just let me know.

On Wed, 18.11.2020 at 16:49, Tim Düsterhus wrote:

> Maciej,
>
> Am 18.11.20 um 14:22 schrieb Maciej Zdeb:
> > I've found an earlier discussion about replacing reqidel (and others) in
> > 2.x: https://www.mail-archive.com/haproxy@formilux.org/msg36321.html
> >
> > So basically we're lacking:
> > http-request del-header x-private-  -m beg
> > http-request del-header x-.*company -m reg
> > http-request del-header -tracea -m end
> >
> > I'll try to implement it in my free time.
>
> Please refer to this issue: https://github.com/haproxy/haproxy/issues/909
>
> Best regards
> Tim Düsterhus
>
>
>


Re: Logging mTLS handshake errors

2020-11-18 Thread Lukas Tribus
Hello Dominik,



On Wed, 18 Nov 2020 at 15:06, Froehlich, Dominik
 wrote:
>
> Hi everyone,
>
>
>
> Some of our customers are using mTLS to authenticate clients. There have been 
> complaints that some certificates don’t work
>
> but we don’t know why. To shed some light on the matter, I’ve tried to add 
> more info to our log format regarding TLS validation:

This is a known pain point:

https://github.com/haproxy/haproxy/issues/693



Lukas



Re: [PATCH v5 0/2] add set server ssl command

2020-11-18 Thread William Lallemand
On Sat, Nov 14, 2020 at 07:25:31PM +0100, William Dauchy wrote:
> Hello,
> 
> This patchset is an attempt to add a new command to configure ssl on a
> server at runtime:
> 
> - the first patch is a simple preparation work
> - the second one adds the new command. Now that I understand how
>   ssl backend connections are initialized, I changed it to: init the SSL
>   connection at startup. The command is only here to de/activate the SSL
>   connection.
> 
> remaining point for another patchset:
> - to follow up the work done on `show stats` with weight done by Willy,
>   I am thinking of displaying use_ssl in that command as well, completely
>   removing the use of `show servers state` for our own use case. As
>   stated by Willy, we however need to make sure not to display this
>   information in all cases, as the stats page can often be public.
> 
> ---
> changed in v2:
> - patch1/4: reorder parameters to match format string
> - patch3/4: reorder includes, error introduced while splitting my patch.
> 
> changed in v3:
> - reorg to allow build without USE_OPENSSL
> 
> changed in v4:
> - init SSL ctx at process startup, as it could not work otherwise because SSL
>   functions are accessing the filesystem
> - slightly change no-ssl keyword behaviour to allow SSL connection init,
>   when being used with a default-server ssl setting
> 
> changed in v5:
> - improve commit message of patch 1/2
> - add test for the new set server ssl command
> 
> William Dauchy (2):
>   MINOR: ssl: create common ssl_ctx init
>   MEDIUM: cli/ssl: configure ssl on server at runtime
> 
>  doc/configuration.txt |  4 ++
>  doc/management.txt|  4 ++
>  include/haproxy/server-t.h|  7 ++-
>  include/haproxy/ssl_sock.h|  1 +
>  .../checks/1be_40srv_odd_health_checks.vtc|  2 +-
>  .../checks/40be_2srv_odd_health_checks.vtc|  2 +-
>  reg-tests/checks/4be_1srv_health_checks.vtc   |  6 +-
>  reg-tests/server/cli_set_ssl.vtc  | 54 +
>  src/cfgparse-ssl.c| 59 +--
>  src/cfgparse.c|  9 ++-
>  src/proxy.c   |  5 +-
>  src/server.c  | 41 -
>  src/ssl_sock.c| 17 ++
>  13 files changed, 165 insertions(+), 46 deletions(-)
>  create mode 100644 reg-tests/server/cli_set_ssl.vtc
> 

Thanks, now merged.

-- 
William Lallemand
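For reference, the merged feature is driven over the runtime API; a hedged usage sketch (socket path, backend and server names are examples, untested):

```
# enable/disable SSL towards one server at runtime via the stats socket;
# per the cover letter, the server's SSL context is prepared at startup
# and this command only activates/deactivates it
echo "set server back/srv1 ssl on"  | socat stdio /var/run/haproxy.sock
echo "set server back/srv1 ssl off" | socat stdio /var/run/haproxy.sock
```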



Content Partnership

2020-11-18 Thread Tegan Johnson
Hi,

I’m Tegan Johnson from Nightfall, the industry's
first cloud-native data loss prevention platform.

Effective data security is critical in today’s businesses. Our team helps
organizations from various industries such as healthcare, education,
technology, and finance understand the sensitive data in their applications by
providing them with best practices to keep their data and organization secure
and prevent data loss.

Here’s an article we recently published:

Preventing S3 bucket Leaks with 5 Best Practices for AWS Cloud Security


Your audience at blog.envoyproxy.io might
find this useful so I thought you might be interested in sharing. Is this
something you can publish on your page?

Sending over a Google doc copy of the article for your convenience:

https://docs.google.com/document/d/1j8DSE1I1luRQz5hUCRe_Lx99ryPMu81XREksWd-_sK0/edit?usp=sharing




We would also be happy to help you promote your site across our social
media channels to help you broaden your reach. I could discuss it with you
in detail if you wish.

Looking forward to hearing from you. Wishing you a terrific day!

Kind regards,

Tegan


P.S. If you want to unsubscribe from this email, let us know and we'll
update our mailing list. Thank you!



Tegan Johnson

Partnership Team

phone: (415) 630-6212

email: tegan.johnson@nightfall.cloud


The content of this email is confidential and intended for the recipient
specified in the message only. You can always unsubscribe by replying to
this message, including unsubscribe in the topic.



Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Tim Düsterhus
Maciej,

Am 18.11.20 um 14:22 schrieb Maciej Zdeb:
> I've found an earlier discussion about replacing reqidel (and others) in
> 2.x: https://www.mail-archive.com/haproxy@formilux.org/msg36321.html
> 
> So basically we're lacking:
> http-request del-header x-private-  -m beg
> http-request del-header x-.*company -m reg
> http-request del-header -tracea -m end
> 
> I'll try to implement it in my free time.

Please refer to this issue: https://github.com/haproxy/haproxy/issues/909

Best regards
Tim Düsterhus





Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Aleksandar Lazic

Hi Maciej.

On 18.11.20 14:22, Maciej Zdeb wrote:
I've found an earlier discussion about replacing reqidel (and others) in 2.x: https://www.mail-archive.com/haproxy@formilux.org/msg36321.html 


So basically we're lacking:
http-request del-header x-private-  -m beg
http-request del-header x-.*company -m reg
http-request del-header -tracea     -m end

I'll try to implement it in my free time.


If I may raise a wish, even though I know and respect your time and your
passion:

Can you make sure the '-i' flag is respected?
http://git.haproxy.org/?p=haproxy.git=search=HEAD=grep=PAT_MF_IGNORE_CASE

Additional info:

From what I have seen when checking the handling of '-i'
(PAT_MF_IGNORE_CASE), the '-m reg' functions do not have the
PAT_MF_IGNORE_CASE check.

Maybe I'm wrong, but is '-i' respected by the '-m reg' pattern? I don't see
the 'icase' variable in these functions, or any other check for the
PAT_MF_IGNORE_CASE flag.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l569
struct pattern *pat_match_regm

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l596
struct pattern *pat_match_reg

Both of these functions use 'regex_exec_match2()', where I also don't see the
PAT_MF_IGNORE_CASE check:
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/regex.c;hb=0217b7b24bb33d746d2bf625f5e894007517d1b0#l217

I have never used '-i' with regex, so maybe there is some magic in the code
which I don't recognize.

Regards
Aleks


On Wed, 18 Nov 2020 at 13:20, Maciej Zdeb <mac...@zdeb.pl> wrote:

Sure, the biggest problem is to delete header by matching prefix:

load_blacklist = function(service)
     local prefix = '/etc/haproxy/configs/maps/header_blacklist'
     local blacklist = {}

     blacklist.req = {}
     blacklist.res = {}
     blacklist.req.str = Map.new(string.format('%s_%s_req.map', prefix, 
service), Map._str)
     blacklist.req.beg = Map.new(string.format('%s_%s_req_beg.map', prefix, 
service), Map._beg)

     return blacklist
end

blacklist = {}
blacklist.testsite = load_blacklist('testsite')

is_denied = function(bl, name)
     return bl ~= nil and (bl.str:lookup(name) ~= nil or 
bl.beg:lookup(name) ~= nil)
end

req_header_filter = function(txn, service)
         local req_headers = txn.http:req_get_headers()
         for name, _ in pairs(req_headers) do
                 if is_denied(blacklist[service].req, name) then
                         txn.http:req_del_header(name)
                 end
         end
end

core.register_action('req_header_filter', { 'http-req' }, 
req_header_filter, 1)

On Wed, 18 Nov 2020 at 12:46, Julien Pivotto <roidelapl...@inuits.eu> wrote:

On 18 Nov 12:33, Maciej Zdeb wrote:
 > Hi again,
 >
 > So "# some headers manipulation, nothing different then on other 
clusters"
 > was the important factor in config. Under this comment I've hidden 
from you
 > one of our LUA scripts that is doing header manipulation like 
deleting all
 > headers from request when its name begins with "abc*". We're doing 
it on
 > all HAProxy servers, but only here it has such a big impact on the 
CPU,
 > because of huge RPS.
 >
 > If I understand correctly:
 > with nbproc = 20, lua interpreter worked on every process
 > with nbproc=1, nbthread=20, lua interpreter works on single 
process/thread
 >
 > I suspect that running lua on multiple threads is not a trivial 
task...

If you can share your lua script maybe we can see if this is doable
more natively in haproxy

 >
 >
 >
 >
 > On Tue, 17 Nov 2020 at 15:50, Maciej Zdeb <mac...@zdeb.pl> wrote:
 >
 > > Hi,
 > >
 > > We're in a process of migration from HAProxy[2.2.5] working on 
multiple
 > > processes to multiple threads. Additional motivation came from the
 > > announcement that the "nbproc" directive was marked as deprecated 
and will
 > > be killed in 2.5.
 > >
 > > Mostly the migration went smoothly but on one of our clusters the 
CPU
 > > usage went so high that we were forced to rollback to nbproc. 
There is
 > > nothing unusual in the config, but the traffic on this particular 
cluster
 > > is quite unusual.
 > >
 > > With nbproc set to 20 CPU idle drops at most to 70%, with nbthread 
= 20
 > > after a couple of minutes at idle 50% it drops to 0%. HAProxy
 > > processes/threads are working on dedicated/isolated CPU cores.
 > >
 > > [image: image.png]
 > >
 > > I mentioned that traffic is quite unusual, because most of it are 
http
 > > requests with some payload in headers and very very small 
responses (like
 

Logging mTLS handshake errors

2020-11-18 Thread Froehlich, Dominik
Hi everyone,

Some of our customers are using mTLS to authenticate clients. There have been 
complaints that some certificates don’t work
but we don’t know why. To shed some light on the matter, I’ve tried to add more 
info to our log format regarding TLS validation:

log-format "%ci:%cp [%tr] (%ID) %ft %b/%s %TR/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS 
%tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %sslc %sslv %[ssl_fc_has_sni] 
%[ssl_c_used] %[ssl_fc_has_crt] %[ssl_c_err] %[ssl_c_ca_err]"


The new elements are

%[ssl_fc_has_sni] %[ssl_c_used] %[ssl_fc_has_crt] %[ssl_c_err] %[ssl_c_ca_err]

As I wanted to know whether there is a validation error, I added ssl_c_err so
I would be able to look it up in OpenSSL later.

However, whenever I try the config with a bad certificate (e.g. expired, not
yet valid, etc.), I don't see the log entry at all.
Instead I just get:

https-in/1: SSL client certificate not trusted

Only after I added

crt-ignore-err all

to the bind directive did I see the actual error code in the log. But then
the certificate would always validate, which is of course not what I want.

Any chance to get a meaningful log message on bad certificates?
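One workaround sometimes used for this (a sketch only, untested, and the security implications must be weighed: the handshake is allowed to complete) is to ignore verify errors at the TLS layer, log the numeric codes, and reject at the HTTP layer instead; paths are placeholders:

```
frontend https-in
    bind :443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required crt-ignore-err all
    # ssl_c_verify is 0 on success, otherwise the OpenSSL verify error code
    http-request deny deny_status 403 if !{ ssl_c_verify 0 }
    log-format "%ci:%cp [%tr] %ft %[ssl_c_verify] %[ssl_c_err] %[ssl_c_ca_err]"
```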


Best regards,
Dominik


Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Maciej Zdeb
I've found an earlier discussion about replacing reqidel (and others) in
2.x: https://www.mail-archive.com/haproxy@formilux.org/msg36321.html

So basically we're lacking:
http-request del-header x-private-  -m beg
http-request del-header x-.*company -m reg
http-request del-header -tracea -m end

I'll try to implement it in my free time.
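Annotated, the missing matchers from the old reqidel would look like this in a config (this is the proposed syntax, not yet implemented at the time of writing):

```
http-request del-header x-private-  -m beg   # headers starting with "x-private-"
http-request del-header x-.*company -m reg   # headers matching the regex
http-request del-header -tracea     -m end   # headers ending in "-tracea"
```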

On Wed, 18 Nov 2020 at 13:20, Maciej Zdeb wrote:

> Sure, the biggest problem is to delete header by matching prefix:
>
> load_blacklist = function(service)
> local prefix = '/etc/haproxy/configs/maps/header_blacklist'
> local blacklist = {}
>
> blacklist.req = {}
> blacklist.res = {}
> blacklist.req.str = Map.new(string.format('%s_%s_req.map', prefix,
> service), Map._str)
> blacklist.req.beg = Map.new(string.format('%s_%s_req_beg.map', prefix,
> service), Map._beg)
>
> return blacklist
> end
>
> blacklist = {}
> blacklist.testsite = load_blacklist('testsite')
>
> is_denied = function(bl, name)
> return bl ~= nil and (bl.str:lookup(name) ~= nil or
> bl.beg:lookup(name) ~= nil)
> end
>
> req_header_filter = function(txn, service)
> local req_headers = txn.http:req_get_headers()
> for name, _ in pairs(req_headers) do
> if is_denied(blacklist[service].req, name) then
> txn.http:req_del_header(name)
> end
> end
> end
>
> core.register_action('req_header_filter', { 'http-req' },
> req_header_filter, 1)
>
> On Wed, 18 Nov 2020 at 12:46, Julien Pivotto wrote:
>
>> On 18 Nov 12:33, Maciej Zdeb wrote:
>> > Hi again,
>> >
>> > So "# some headers manipulation, nothing different then on other
>> clusters"
>> > was the important factor in config. Under this comment I've hidden from
>> you
>> > one of our LUA scripts that is doing header manipulation like deleting
>> all
>> > headers from request when its name begins with "abc*". We're doing it on
>> > all HAProxy servers, but only here it has such a big impact on the CPU,
>> > because of huge RPS.
>> >
>> > If I understand correctly:
>> > with nbproc = 20, lua interpreter worked on every process
>> > with nbproc=1, nbthread=20, lua interpreter works on single
>> process/thread
>> >
>> > I suspect that running lua on multiple threads is not a trivial task...
>>
>> If you can share your lua script maybe we can see if this is doable
>> more natively in haproxy
>>
>> >
>> >
>> >
>> >
>> > On Tue, 17 Nov 2020 at 15:50, Maciej Zdeb wrote:
>> >
>> > > Hi,
>> > >
>> > > We're in a process of migration from HAProxy[2.2.5] working on
>> multiple
>> > > processes to multiple threads. Additional motivation came from the
>> > > announcement that the "nbproc" directive was marked as deprecated and
>> will
>> > > be killed in 2.5.
>> > >
>> > > Mostly the migration went smoothly but on one of our clusters the CPU
>> > > usage went so high that we were forced to rollback to nbproc. There is
>> > > nothing unusual in the config, but the traffic on this particular
>> cluster
>> > > is quite unusual.
>> > >
>> > > With nbproc set to 20 CPU idle drops at most to 70%, with nbthread =
>> 20
>> > > after a couple of minutes at idle 50% it drops to 0%. HAProxy
>> > > processes/threads are working on dedicated/isolated CPU cores.
>> > >
>> > > [image: image.png]
>> > >
>> > > I mentioned that traffic is quite unusual, because most of it are http
>> > > requests with some payload in headers and very very small responses
>> (like
>> > > 200 OK). On multi-proc setup HAProxy handles about 20 to 30k of
>> connections
>> > > (on frontend and backend) and about 10-20k of http requests. Incoming
>> > > traffic is just about 100-200Mbit/s and outgoing 40-100Mbit/s from
>> frontend
>> > > perspective.
>> > >
>> > > Did someone experience similar behavior of HAProxy? I'll try to
>> collect
>> > > more data and generate similar traffic with sample config to show a
>> > > difference in performance between nbproc and nbthread.
>> > >
>> > > I'll greatly appreciate any hints on what I should focus. :)
>> > >
>> > > Current config is close to:
>> > > frontend front
>> > > mode http
>> > > option http-keep-alive
>> > > http-request add-header X-Forwarded-For %[src]
>> > >
>> > > # some headers manipulation, nothing different then on other
>> clusters
>> > >
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 1
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 2
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 3
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 4
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 5
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 6
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 7
>> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
>> process 8
>> > >  

Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Maciej Zdeb
Sure, the biggest problem is to delete header by matching prefix:

load_blacklist = function(service)
local prefix = '/etc/haproxy/configs/maps/header_blacklist'
local blacklist = {}

blacklist.req = {}
blacklist.res = {}
blacklist.req.str = Map.new(string.format('%s_%s_req.map', prefix,
service), Map._str)
blacklist.req.beg = Map.new(string.format('%s_%s_req_beg.map', prefix,
service), Map._beg)

return blacklist
end

blacklist = {}
blacklist.testsite = load_blacklist('testsite')

is_denied = function(bl, name)
return bl ~= nil and (bl.str:lookup(name) ~= nil or bl.beg:lookup(name)
~= nil)
end

req_header_filter = function(txn, service)
local req_headers = txn.http:req_get_headers()
for name, _ in pairs(req_headers) do
if is_denied(blacklist[service].req, name) then
txn.http:req_del_header(name)
end
end
end

core.register_action('req_header_filter', { 'http-req' },
req_header_filter, 1)
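For completeness, the action registered at the end of the script would be wired into the proxy configuration roughly like this (the file path and service argument are illustrative):

```
global
    lua-load /etc/haproxy/header_filter.lua

frontend front
    # the trailing argument is passed to the action as the "service" parameter
    http-request lua.req_header_filter testsite
```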

On Wed, 18 Nov 2020 at 12:46, Julien Pivotto wrote:

> On 18 Nov 12:33, Maciej Zdeb wrote:
> > Hi again,
> >
> > So "# some headers manipulation, nothing different then on other
> clusters"
> > was the important factor in config. Under this comment I've hidden from
> you
> > one of our LUA scripts that is doing header manipulation like deleting
> all
> > headers from request when its name begins with "abc*". We're doing it on
> > all HAProxy servers, but only here it has such a big impact on the CPU,
> > because of huge RPS.
> >
> > If I understand correctly:
> > with nbproc = 20, lua interpreter worked on every process
> > with nbproc=1, nbthread=20, lua interpreter works on single
> process/thread
> >
> > I suspect that running lua on multiple threads is not a trivial task...
>
> If you can share your lua script maybe we can see if this is doable
> more natively in haproxy
>
> >
> >
> >
> >
> > On Tue, 17 Nov 2020 at 15:50, Maciej Zdeb wrote:
> >
> > > Hi,
> > >
> > > We're in a process of migration from HAProxy[2.2.5] working on multiple
> > > processes to multiple threads. Additional motivation came from the
> > > announcement that the "nbproc" directive was marked as deprecated and
> will
> > > be killed in 2.5.
> > >
> > > Mostly the migration went smoothly but on one of our clusters the CPU
> > > usage went so high that we were forced to rollback to nbproc. There is
> > > nothing unusual in the config, but the traffic on this particular
> cluster
> > > is quite unusual.
> > >
> > > With nbproc set to 20 CPU idle drops at most to 70%, with nbthread = 20
> > > after a couple of minutes at idle 50% it drops to 0%. HAProxy
> > > processes/threads are working on dedicated/isolated CPU cores.
> > >
> > > [image: image.png]
> > >
> > > I mentioned that traffic is quite unusual, because most of it are http
> > > requests with some payload in headers and very very small responses
> (like
> > > 200 OK). On multi-proc setup HAProxy handles about 20 to 30k of
> connections
> > > (on frontend and backend) and about 10-20k of http requests. Incoming
> > > traffic is just about 100-200Mbit/s and outgoing 40-100Mbit/s from
> frontend
> > > perspective.
> > >
> > > Did someone experience similar behavior of HAProxy? I'll try to collect
> > > more data and generate similar traffic with sample config to show a
> > > difference in performance between nbproc and nbthread.
> > >
> > > I'll greatly appreciate any hints on what I should focus. :)
> > >
> > > Current config is close to:
> > > frontend front
> > > mode http
> > > option http-keep-alive
> > > http-request add-header X-Forwarded-For %[src]
> > >
> > > # some headers manipulation, nothing different then on other
> clusters
> > >
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 1
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 2
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 3
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 4
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 5
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 6
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 7
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 8
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process 9
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process
> > > 10
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process
> > > 11
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process
> > > 12
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process
> > > 13
> > > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1
> process
> > > 14
> > > bind x.x.x.x:443 ssl 

Re: [2.2.5] High cpu usage after switch to threads

2020-11-18 Thread Julien Pivotto
On 18 Nov 12:33, Maciej Zdeb wrote:
> Hi again,
> 
> So "# some headers manipulation, nothing different than on other clusters"
> was the important factor in the config. Under this comment I've hidden from you
> one of our Lua scripts that does header manipulation, like deleting all
> headers from the request whose names begin with "abc*". We're doing it on
> all HAProxy servers, but only here does it have such a big impact on the CPU,
> because of the huge RPS.
> 
> If I understand correctly:
> with nbproc = 20, the Lua interpreter ran in every process
> with nbproc=1, nbthread=20, the Lua interpreter runs in a single process/thread
> 
> I suspect that running lua on multiple threads is not a trivial task...

If you can share your Lua script, maybe we can see whether this is doable
more natively in haproxy.

> 
> 
> 
> 
> wt., 17 lis 2020 o 15:50 Maciej Zdeb  napisał(a):
> 
> > Hi,
> >
> > We're in the process of migrating HAProxy [2.2.5] from multiple
> > processes to multiple threads. Additional motivation came from the
> > announcement that the "nbproc" directive was marked as deprecated and will
> > be removed in 2.5.
> >
> > Mostly the migration went smoothly, but on one of our clusters the CPU
> > usage went so high that we were forced to roll back to nbproc. There is
> > nothing unusual in the config, but the traffic on this particular cluster
> > is quite unusual.
> >
> > With nbproc set to 20, CPU idle drops at most to 70%; with nbthread = 20,
> > after a couple of minutes at 50% idle it drops to 0%. HAProxy
> > processes/threads are working on dedicated/isolated CPU cores.
> >
> > [image: image.png]
> >
> > I mentioned that the traffic is quite unusual because most of it is http
> > requests with some payload in the headers and very, very small responses
> > (like 200 OK). On the multi-proc setup HAProxy handles about 20 to 30k
> > connections (on frontend and backend) and about 10-20k http requests.
> > Incoming traffic is just about 100-200Mbit/s and outgoing 40-100Mbit/s
> > from the frontend perspective.
> >
> > Has anyone experienced similar behavior of HAProxy? I'll try to collect
> > more data and generate similar traffic with a sample config to show the
> > difference in performance between nbproc and nbthread.
> >
> > I'll greatly appreciate any hints on what I should focus on. :)
> >
> > Current config is close to:
> > frontend front
> > mode http
> > option http-keep-alive
> > http-request add-header X-Forwarded-For %[src]
> >
> > # some headers manipulation, nothing different than on other clusters
> >
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 1
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 2
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 3
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 4
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 5
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 6
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 7
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 8
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 9
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 10
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 11
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 12
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 13
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 14
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 15
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 16
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 17
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 18
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 19
> > bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process
> > 20
> > default_backend back
> >
> > backend back
> > option http-keep-alive
> > mode http
> > http-reuse always
> > option httpchk GET /health HTTP/1.0\r\nHost:\ example.com
> > http-check expect string OK
> >
> > server slot_0_checker 10.x.x.x:31180 check weight 54
> > server slot_1_checker 10.x.x.x:31146 check weight 33
> > server slot_2_checker 10.x.x.x:31313 check weight 55
> > server slot_3_checker 10.x.x.x:31281 check weight 33 disabled
> > server slot_4_checker 10.x.x.x:31717 check weight 55
> > server slot_5_checker 10.x.x.x:31031 check weight 76
> > server slot_6_checker 10.x.x.x:31124 check weight 50
> > server slot_7_checker 10.x.x.x:31353 check weight 48
> > server slot_8_checker 10.x.x.x:31839 check weight 33
> > 

Re : haproxy.org : Budget-Friendly SEO Packages..

2020-11-18 Thread Mary Hernandez
Hi haproxy.org Owner,



I want to reach out to you to learn whether you are in need of "SEO /
Digital Marketing / website development / maintenance / re-design / user
experience mapping services" for haproxy.org.



I would like to offer you our work portfolio and client testimonials on
request. Moreover, I can deliver a fully responsive website within 7
business days.



Is this something you are interested in?

If yes, please allow me to send you a no obligation audit report and quote.

Hoping to hear from you and take this partnership ahead.

Best Regards,

Mary Hernandez | SEO Consultant


I also prepared a free website audit report for your website. If you are
interested I can show you the report. I'd be happy to send you our package,
pricing and past work details, if you'd like to assess our work.


netdev.nl is going on sale

2020-11-18 Thread Luise Mol
Dear Sir or Madam,


I am selling the domain name netdev.nl.

Is that of interest to you?

Kind regards,

Luise Mol



Re: Disable client keep-alive using ACL

2020-11-18 Thread Tim Düsterhus, WoltLab GmbH
Lukas,

Am 17.11.20 um 17:37 schrieb Lukas Tribus:
>>> is it possible to reliably disable client keep-alive on demand based on
>>> the result of an ACL?
>>>
>>> I was successful for HTTP/1 requests by using:
>>>
>>> http-after-response set-header connection close if foo
>>>
>>> But apparently that has no effect for HTTP/2 requests. I was unable to
>>> find anything within the documentation with regard to this either.
> 
> I don't think there is a way. In HTTP/2 you'd need to send a GOAWAY
> message to close the connection. There are no instructions in the HTTP
> headers regarding the connection.

I would be fine with some other method of communicating the Close /
GOAWAY, e.g. using http-response stop-serving-this-client.

> I *think/hope* we are actually sending GOAWAY messages when:
> 
> - some timeouts are reached
> - hard-stop-after triggers
> - a "shutdown session ..." is triggered
> 
> 
> You could check if sending a "421 Misdirected Request" error to the
> client could achieve your goal, but it certainly behaves differently
> than a close in H1 (you can't get a successful answer to the client).
> It's also a workaround.

It will not. What I was attempting to do was force clients to
re-establish the TCP connection after every request when rate limits are
exceeded, to slow them down and/or make them spend more resources.

The reason is that we want to avoid outright blocking with e.g. a 429
Too Many Requests, because that could affect legitimate traffic. Forcing
the client to re-establish the connection should not be noticeable for a
properly implemented client, other than an increased latency.

I'm aware that this will be more costly for us as well, but we have
plenty of spare capacity at the load balancers.
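For HTTP/1.x, the intended behaviour can be sketched with a stick table tracking the request rate (names, paths, and thresholds are illustrative; as discussed above, this has no effect on HTTP/2 connections):

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/site.pem
    stick-table type ip size 100k expire 60s store http_req_rate(10s)
    http-request track-sc0 src
    acl over_limit sc_http_req_rate(0) gt 100
    # force H1 clients over the limit to reopen the connection per request
    http-after-response set-header Connection close if over_limit
```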

> Triggering GOAWAY/full H2 connection teardown dynamically would need
> to be implemented. I think in HTX all connection headers are
> immediately dropped (they are not "translated" and applied to the
> connection).
> 

Can you (or anyone else) comment on whether that would actually be
feasible? I would create a feature request in the tracker then. If not
I'll save the effort. If I can get some pointers I might even have a
stab at implementing this myself.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P