I've found an earlier discussion about replacing reqidel (and others) in
2.x: https://www.mail-archive.com/[email protected]/msg36321.html

So basically we're lacking:
http-request del-header x-private-  -m beg
http-request del-header x-.*company -m reg
http-request del-header -tracea     -m end

I'll try to implement it in my free time.
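As a minimal sketch, this is how the proposed syntax would read in a frontend (the frontend name is a placeholder, the header names are the examples from above, and the `-m` flags reuse the existing ACL matching-method convention):

```
frontend front
    mode http
    # drop any request header whose name begins with "x-private-"
    http-request del-header x-private- -m beg
    # drop request headers whose names match a regular expression
    http-request del-header x-.*company -m reg
```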

On Wed, 18 Nov 2020 at 13:20, Maciej Zdeb <[email protected]> wrote:

> Sure, the biggest problem is deleting headers by matching a prefix:
>
> load_blacklist = function(service)
>     local prefix = '/etc/haproxy/configs/maps/header_blacklist'
>     local blacklist = {}
>
>     blacklist.req = {}
>     blacklist.res = {}
>     blacklist.req.str = Map.new(string.format('%s_%s_req.map', prefix, service), Map._str)
>     blacklist.req.beg = Map.new(string.format('%s_%s_req_beg.map', prefix, service), Map._beg)
>
>     return blacklist
> end
>
> blacklist = {}
> blacklist.testsite = load_blacklist('testsite')
>
> is_denied = function(bl, name)
>     return bl ~= nil and (bl.str:lookup(name) ~= nil or bl.beg:lookup(name) ~= nil)
> end
>
> req_header_filter = function(txn, service)
>     local req_headers = txn.http:req_get_headers()
>     for name, _ in pairs(req_headers) do
>         if is_denied(blacklist[service].req, name) then
>             txn.http:req_del_header(name)
>         end
>     end
> end
>
> core.register_action('req_header_filter', { 'http-req' }, req_header_filter, 1)
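For completeness, a sketch of how an action registered this way is wired into the HAProxy config (the script path is illustrative; `lua-load` and the `lua.` action prefix are standard, and the single argument maps to the action's `service` parameter):

```
lua-load /etc/haproxy/scripts/header_filter.lua

frontend front
    mode http
    # invoke the registered Lua action; "testsite" is passed as the service argument
    http-request lua.req_header_filter testsite
```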
>
> On Wed, 18 Nov 2020 at 12:46, Julien Pivotto <[email protected]> wrote:
>
>> On 18 Nov 12:33, Maciej Zdeb wrote:
>> > Hi again,
>> >
>> > So "# some headers manipulation, nothing different than on other
>> > clusters" was the important factor in the config. Under this comment I
>> > hid one of our Lua scripts that manipulates headers, e.g. deleting all
>> > request headers whose names begin with "abc*". We're doing this on all
>> > HAProxy servers, but only here does it have such a big impact on the
>> > CPU, because of the huge RPS.
>> >
>> > If I understand correctly:
>> > with nbproc = 20, the Lua interpreter worked on every process
>> > with nbproc = 1, nbthread = 20, the Lua interpreter works on a single
>> > process/thread
>> >
>> > I suspect that running Lua on multiple threads is not a trivial task...
>>
>> If you can share your Lua script, maybe we can see if this is doable
>> more natively in haproxy.
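For context on why running Lua across threads is hard: with `lua-load`, all threads share a single Lua state serialized by a global lock, so Lua-heavy processing contends across threads; HAProxy 2.4 (released after this thread) added `lua-load-per-thread` to give each thread its own state, at the cost of not sharing Lua globals between threads. A sketch, with an illustrative script path:

```
# one shared Lua state, serialized by a global lock (the situation described above)
lua-load /etc/haproxy/scripts/header_filter.lua

# HAProxy 2.4+: one Lua state per thread, no cross-thread lock
# lua-load-per-thread /etc/haproxy/scripts/header_filter.lua
```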
>>
>> >
>> > On Tue, 17 Nov 2020 at 15:50, Maciej Zdeb <[email protected]> wrote:
>> >
>> > > Hi,
>> > >
>> > > We're in the process of migrating HAProxy [2.2.5] from multiple
>> > > processes to multiple threads. Additional motivation came from the
>> > > announcement that the "nbproc" directive has been marked as deprecated
>> > > and will be removed in 2.5.
>> > >
>> > > Mostly the migration went smoothly, but on one of our clusters the CPU
>> > > usage went so high that we were forced to roll back to nbproc. There is
>> > > nothing unusual in the config, but the traffic on this particular
>> > > cluster is quite unusual.
>> > >
>> > > With nbproc set to 20, CPU idle drops to 70% at most; with nbthread =
>> > > 20, idle sits at 50% for a couple of minutes and then drops to 0%.
>> > > HAProxy processes/threads run on dedicated/isolated CPU cores.
>> > >
>> > > [image: image.png]
>> > >
>> > > I mentioned that the traffic is quite unusual because most of it is
>> > > HTTP requests with some payload in the headers and very, very small
>> > > responses (like 200 OK). In the multi-process setup HAProxy handles
>> > > about 20-30k connections (on frontend and backend) and about 10-20k
>> > > HTTP requests. Incoming traffic is just about 100-200 Mbit/s and
>> > > outgoing 40-100 Mbit/s from the frontend perspective.
>> > >
>> > > Has anyone experienced similar behavior with HAProxy? I'll try to
>> > > collect more data and generate similar traffic with a sample config to
>> > > show the difference in performance between nbproc and nbthread.
>> > >
>> > > I'd greatly appreciate any hints on what I should focus on. :)
>> > >
>> > > The current config is close to:
>> > > frontend front
>> > >     mode http
>> > >     option http-keep-alive
>> > >     http-request add-header X-Forwarded-For %[src]
>> > >
>> > >     # some headers manipulation, nothing different than on other clusters
>> > >
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 1
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 2
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 3
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 4
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 5
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 6
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 7
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 8
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 9
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 10
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 11
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 12
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 13
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 14
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 15
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 16
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 17
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 18
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 19
>> > >     bind x.x.x.x:443 ssl crt /etc/cert/a.pem.pem alpn h2,http/1.1 process 20
>> > >
>> > >     default_backend back
>> > >
>> > > backend back
>> > >     option http-keep-alive
>> > >     mode http
>> > >     http-reuse always
>> > >     option httpchk GET /health HTTP/1.0\r\nHost:\ example.com
>> > >     http-check expect string OK
>> > >
>> > >     server slot_0_checker 10.x.x.x:31180 check weight 54
>> > >     server slot_1_checker 10.x.x.x:31146 check weight 33
>> > >     server slot_2_checker 10.x.x.x:31313 check weight 55
>> > >     server slot_3_checker 10.x.x.x:31281 check weight 33 disabled
>> > >     server slot_4_checker 10.x.x.x:31717 check weight 55
>> > >     server slot_5_checker 10.x.x.x:31031 check weight 76
>> > >     server slot_6_checker 10.x.x.x:31124 check weight 50
>> > >     server slot_7_checker 10.x.x.x:31353 check weight 48
>> > >     server slot_8_checker 10.x.x.x:31839 check weight 33
>> > >     server slot_9_checker 10.x.x.x:31854 check weight 44
>> > >     server slot_10_checker 10.x.x.x:31794 check weight 60 disabled
>> > >     server slot_11_checker 10.x.x.x:31561 check weight 56
>> > >     server slot_12_checker 10.x.x.x:31814 check weight 57
>> > >     server slot_13_checker 10.x.x.x:31535 check weight 44 disabled
>> > >     server slot_14_checker 10.x.x.x:31829 check weight 43 disabled
>> > >     server slot_15_checker 10.x.x.x:31655 check weight 40 disabled
>> > >
>>
>> --
>>  (o-    Julien Pivotto
>>  //\    Open-Source Consultant
>>  V_/_   Inuits - https://www.inuits.eu
>>
>
