Re: Theoretical limits for a HAProxy instance

2022-12-13 Thread Emerson Gomes
Hi,

Have you tried increasing the number of processes/threads?
I don't see any nbthread or nbproc in your config.

Check out https://www.haproxy.com/blog/multithreading-in-haproxy/
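For example, a minimal sketch (assuming HAProxy 1.8+, where `nbthread` is available; the value should match the machine's hardware thread count):

```
global
    # Sketch: one thread per hardware thread on the 16-thread Ryzen 7 3700X
    nbthread 16
```

With `nbproc`-style multi-process setups the stats socket and stick tables behave differently, so threads are usually the simpler option on modern versions.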

BR.,
Emerson


On Mon, Dec 12, 2022 at 02:49, Iago Alonso 
wrote:

> Hello,
>
> We are performing a lot of load tests, and we hit what we think is an
> artificial limit of some sort, or a parameter that we are not taking
> into account (HAProxy config setting, kernel parameter…). We are
> wondering if there’s a known limit on what HAProxy is able to process,
> or if someone has experienced something similar, as we are thinking
> about moving to bigger servers, and we don’t know if we will observe a
> big difference.
>
> When trying to perform the load test in production, we observe that we
> can sustain 200k connections, and 10k rps, with a load1 of about 10.
> The maxsslrate and maxsslconn are maxed out, but we handle the
> requests fine, and we don’t return 5xx. Once we increase the load just
> a bit and hit 11k rps and about 205k connections, we start to return
> 5xx and we rapidly decrease the load, as these are tests against
> production.
>
> Production server specs:
> CPU: AMD Ryzen 7 3700X 8-Core Processor (16 threads)
> RAM: DDR4 64GB (2666 MT/s)
>
> When trying to perform a load test with synthetic tests using k6 as
> our load generator against staging, we are able to sustain 750k
> connections, with 20k rps. The load generator has a ramp-up time of
> 120s to achieve the 750k connections, as that’s what we are trying to
> benchmark.
>
> Staging server specs:
> CPU: AMD Ryzen 5 3600 6-Core Processor (12 threads)
> RAM: DDR4 64GB (3200 MT/s)
>
> I've made a post about this on discourse, and I got the suggestion to
> post here. In said post, I've included screenshots of some of our
> Prometheus metrics.
>
> https://discourse.haproxy.org/t/theoretical-limits-for-a-haproxy-instance/8168
>
> Custom kernel parameters:
> net.ipv4.ip_local_port_range = "12768 60999"
> net.nf_conntrack_max = 500
> fs.nr_open = 500
>
> HAProxy config:
> global
> log /dev/log len 65535 local0 warning
> chroot /var/lib/haproxy
> stats socket /run/haproxy-admin.sock mode 660 level admin
> user haproxy
> group haproxy
> daemon
> maxconn 200
> maxconnrate 2500
> maxsslrate 2500
>
> defaults
> log global
> option  dontlognull
> timeout connect 10s
> timeout client  120s
> timeout server  120s
>
> frontend stats
> mode http
> bind *:8404
> http-request use-service prometheus-exporter if { path /metrics }
> stats enable
> stats uri /stats
> stats refresh 10s
>
> frontend k8s-api
> bind *:6443
> mode tcp
> option tcplog
> timeout client 300s
> default_backend k8s-api
>
> backend k8s-api
> mode tcp
> option tcp-check
> timeout server 300s
> balance leastconn
> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
> maxconn 500 maxqueue 256 weight 100
> server master01 x.x.x.x:6443 check
> server master02 x.x.x.x:6443 check
> server master03 x.x.x.x:6443 check
> retries 0
>
> frontend k8s-server
> bind *:80
> mode http
> http-request add-header X-Forwarded-Proto http
> http-request add-header X-Forwarded-Port 80
> default_backend k8s-server
>
> backend k8s-server
> mode http
> balance leastconn
> option forwardfor
> default-server inter 10s downinter 5s rise 2 fall 2 check
> server worker01a x.x.x.x:31551 maxconn 20
> server worker02a x.x.x.x:31551 maxconn 20
> server worker03a x.x.x.x:31551 maxconn 20
> server worker04a x.x.x.x:31551 maxconn 20
> server worker05a x.x.x.x:31551 maxconn 20
> server worker06a x.x.x.x:31551 maxconn 20
> server worker07a x.x.x.x:31551 maxconn 20
> server worker08a x.x.x.x:31551 maxconn 20
> server worker09a x.x.x.x:31551 maxconn 20
> server worker10a x.x.x.x:31551 maxconn 20
> server worker11a x.x.x.x:31551 maxconn 20
> server worker12a x.x.x.x:31551 maxconn 20
> server worker13a x.x.x.x:31551 maxconn 20
> server worker14a x.x.x.x:31551 maxconn 20
> server worker15a x.x.x.x:31551 maxconn 20
> server worker16a x.x.x.x:31551 maxconn 20
> server worker17a x.x.x.x:31551 maxconn 20
> server worker18a x.x.x.x:31551 maxconn 20
> server worker19a x.x.x.x:31551 maxconn 20
> server worker20a x.x.x.x:31551 maxconn 20
> server worker01an x.x.x.x:31551 maxconn 20
> server worker02an x.x.x.x:31551 maxconn 20
> server worker03an x.x.x.x:31551 maxconn 20
> retries 0
>
> frontend k8s-server-https
> bind *:443 ssl crt /etc/haproxy/certs/
> mode http
> http-request add-header X-Forwarded-Proto https
> http-request add-header X-Forwarded-Port 443
> http-request del-header X-SERVER-SNI
> http-request set-header X-SERVER-SNI %[ssl_

Re: Question about http compression

2022-02-21 Thread Emerson Gomes
Hi,

You're mixing up the concepts of TLS compression and HTTP compression. They
are different things.
Indeed TLS compression is not advised due to security concerns.

However, this has nothing to do with HTTP compression, which is normally
done using the gzip or brotli algorithms and signalled via the
"Content-Encoding" HTTP header.

HTTP compression is generally advised when you often serve highly
compressible files (like HTML), but keep in mind that it has a CPU cost that
is noticeable on very high-traffic sites. That's why you might sometimes
want HAProxy to compress HTTP responses, to offload the CPU cost from
your backend servers.

In HAProxy you can use http://www.libslz.org/, which provides ultra-fast
compression with the gzip algorithm.
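A minimal sketch of enabling it (these are the stock HAProxy compression directives; adjust the MIME type list to your own content):

```
defaults
    mode http
    # gzip output; served by libslz when HAProxy is built with USE_SLZ=1
    compression algo gzip
    # only compress types that actually shrink; skip images/video
    compression type text/html text/plain text/css application/javascript application/json
```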

BR.,
Emerson

On Mon, Feb 21, 2022 at 14:26, Tom Browder 
wrote:

> I'm getting ready to try HAProxy 2.5 on my system and see that HTTP
> compression is recommended.
>
> I am running Apache 2.4.52 and have for years tried to keep its TLS
> security as good as possible according to what advice I get from the Apache
> docs and SSL Labs. From those sources I thought https should not use
> compression because of some known exploit, so I'm not currently using it.
> My sites get an A+ rating from SSL Labs testing.
>
> So, not being at all an expert, I plan not to use the compression
> (although I've always wanted to).  Perhaps I'm not as up-to-date as I
> should be (this is a hobby, but it's an important one, although I can't
> spend the time on it I would like to).
>
> Your thoughts and advice are appreciated.
>
> -Tom
>


Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-29 Thread Emerson Gomes
Hello,

If you want "definitive proof" that you're not using AES-NI instructions
during your benchmark, you could simply compile OpenSSL (and then HAProxy,
linking it against this OpenSSL build) passing the "-mno-aes" flag to GCC in
the process.

Then, to make sure your compilation succeeded, check both resulting
binaries (haproxy + libcrypto.so) for any aesni instructions:

$ objdump --disassemble-all libcrypto.so | grep -E 'aes(enc|dec)'

There should be none.
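A quicker runtime check that needs no recompilation: OpenSSL honors the `OPENSSL_ia32cap` environment variable for masking the CPU features it detects. The bit-57 mask below is the commonly cited AES-NI bit, but treat the exact value as an assumption to verify against your OpenSSL version:

```shell
# Baseline: AES-NI used if the CPU has it
openssl speed -evp aes-256-gcm

# Mask capability bit 57 (AES-NI) so the software fallback is used;
# on AES-NI hardware the throughput drop should be obvious.
OPENSSL_ia32cap="~0x200000000000000" openssl speed -evp aes-256-gcm
```

The same variable can be set in HAProxy's environment to reproduce the "disabled" test described below.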

BR.,
Emerson


On Fri, Oct 29, 2021 at 00:09, Shawn Heisey 
wrote:

> On 10/28/21 2:11 PM, Lukas Tribus wrote:
> > You would have to run a single request causing a large download, and
> > run haproxy through a cpu profiler, like perf, and compare outputs.
>
> I am learning all sorts of useful things. I see evidence of acceleration
> when pulling a large file with curl!  Average transfer speed is visibly
> lower with acceleration disabled.  First test is haproxy started
> normally, second is haproxy started with the environment variable to
> disable the aes-ni CPU flag:
>
> root@sauron:~# curl --ciphers ECDHE-RSA-AES256-GCM-SHA384
> https://server.domain.tld/4gbrandom > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 4096M  100 4096M    0     0  63.4M      0  0:01:04  0:01:04 --:--:-- 63.5M
> root@sauron:~# curl --ciphers ECDHE-RSA-AES256-GCM-SHA384
> https://server.domain.tld/4gbrandom > /dev/null
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 4096M  100 4096M    0     0  52.2M      0  0:01:18  0:01:18 --:--:-- 61.4M
>
> The file I transferred is 4GB in size, copied from /dev/urandom with
> dd.  Did the pull from another machine on the same gigabit LAN.  I
> picked the cipher by watching for TLS 1.2 ciphers shown by testssl.sh
> and choosing one that mentioned AES.  The server has plenty of memory to
> cache that entire 4GB file, so disk speed should be irrelevant.
>
> Thank you for hanging onto enough patience to help me navigate this
> rabbit hole.
>
> Thanks,
> Shawn
>
>


Re: Help

2021-07-07 Thread Emerson Gomes
Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the
directory where your PEM file(s) resides.
Also, make sure that both the certificate and private key are contained
within the same PEM file.

It should look like this:

-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
xxx
-----END PRIVATE KEY-----
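To build such a combined file, you can simply concatenate the certificate (and any chain) with the key. The file names here are placeholders:

```shell
# Hypothetical file names; write the result into the directory that the
# "bind ... crt" line points at (e.g. /etc/haproxy/certs/).
# Order: certificate first (then any intermediates), private key last.
cat example.crt example.key > example.pem
```

Since the result contains the private key, keep its permissions restrictive (e.g. `chmod 600`).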

BR.,
Emerson

On Wed, Jul 7, 2021 at 14:47, Anilton Silva Fernandes <
anilton.fernan...@cvt.cv> wrote:

> Hi there.
>
>
>
> Can I get some help from you.
>
>
>
> I’m configuring HAProxy as a frontend on HTTPS with centified and I want
> clients to be redirect to BACKEND on HTTPS as well (443) but I want clients
> to see only HAProxy certificate, as the backend one is not valid.
>
>
>
> Below is the schematic of my design:
>
> [schematic image not preserved in the archive]
>
> So, on
>
>
>
> This is the configuration file I’m using:
>
>
>
> frontend haproxy
>     mode http
>     bind *:80
>     bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
>     default_backend wso2
>
> backend wso2
>     mode http
>     option forwardfor
>     redirect scheme https if !{ ssl_fc }
>     server my-api 10.16.18.128:443 check ssl verify none
>     http-request set-header X-Forwarded-Port %[dst_port]
>     http-request add-header X-Forwarded-Proto https if { ssl_fc }
>
> frontend web_accounts
>     mode tcp
>     bind 192.168.1.214:443
>     default_backend accounts_servers
>
> frontend web_apimanager
>     mode tcp
>     bind 192.168.1.215:443
>     default_backend apimanager_servers
>
> backend accounts_servers
>     balance roundrobin
>     server accounts1 10.16.18.128:443 check
>     server accounts2 10.16.18.128:443 check
>
> backend apimanager_servers
>     balance roundrobin
>     server accounts1 10.16.18.128:443 check
>     server accounts2 10.16.18.128:443 check
>
> The first one is what works but we got SSL problems due to invalid
> certificates on Backend;
>
>
>
> The second one is what we would like, but does not work and says some
> erros:
>
> [ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind
> *:443' : unable to load SSL private key from PEM file '/etc/ssl/
> cvt.cv/accounts_cvt.pem'.
>
> [ALERT] 187/114337 (7823) : Error(s) found in configuration file :
> /etc/haproxy/haproxy.cfg
>
> [ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified
> for bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').
>
> [ALERT] 187/114337 (7823) : Fatal errors found in configuration.
>
> Errors in configuration file, check with haproxy check.
>
>
>
>
>
> This is on CentOS 6
>
>
>
> Thank you
>
>
>
>
>
>
>
>
>
> Melhores Cumprimentos
>
>
>
> *Anilton Fernandes | Plataformas, Sistemas e Infraestruturas*
>
> Cabo Verde Telecom, SA
>
> Group Cabo Verde Telecom
>
> Rua Cabo Verde Telecom, 1, Edificio CVT
>
> 198, Praia, Santiago, República de Cabo Verde
>
> Phone: +238 3503934 | Mobile: +238 9589123 | Email –
> anilton.fernan...@cvt.cv
>


Re: Replicated stick tables have absurd values for conn_cur

2019-01-15 Thread Emerson Gomes
Hi Willy, Tim,

I am providing some more details about my setup if you wish to try to
reproduce the issue.
As I mentioned before, I have 5 HAProxy nodes, all of them listening on
public IPs.
My DNS is set up in round-robin mode on AWS R53, resolving each request to
one of the HAProxy nodes' individual IPs.
It means that very commonly one client will have multiple connections with
many (or even all) nodes in the cluster - They also tend to
connect/disconnect fast (little keep-alive usage), making this race
condition quite likely to happen.

I suppose the scenario Tim described earlier is accurate:

- Connect to peer A (A=1, B=0)
- Peer A sends 1 to B   (A=1, B=1)
- Kill connection to A  (A=0, B=1)
- Connect to peer B (A=0, B=2)
- Peer A sends 0 to B   (A=0, B=0)
- Peer B sends 0/2 to A (A=?, B=0)
- Kill connection to B  (A=?, B=-1)
- Peer B sends -1 to A  (A=-1, B=-1)
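That last step is consistent with the ~4 billion conn_cur values observed: a -1 stored into a 32-bit unsigned counter wraps around, as this one-line sketch shows:

```shell
# Decrementing an unsigned 32-bit counter that is already 0 wraps to 2^32 - 1.
printf '%u\n' "$(( (0 - 1) & 0xFFFFFFFF ))"   # prints 4294967295
```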


Let me know if you wish to add some debugging info to the patch in
order to dump some extra information when this scenario happens.

BR.,
Emerson





On Tue, Jan 15, 2019 at 11:50, Willy Tarreau wrote:

> Hello Emerson,
>
> On Mon, Jan 14, 2019 at 10:26:40PM +0100, Emerson Gomes wrote:
> > Hello Tim,
> >
> > Sorry for the delayed answer.
> > The segfaults I had experienced apparently were related to something else -
> > Maybe some issue in my env.
> > At first I tried to apply the patch to 1.9.0, but after applying it to
> > 1.8.7, I no longer had the segfaults.
> >
> > So far I haven't experienced the underflow issue again.
> > I think it would be nice to merge this change to next releases - Not sure
> > how this is managed around here without the tracking tool :)
>
> Thanks for the report! Tim, could you elaborate a little bit more on how
> the race reproduces ? I'm asking because if we only apply the underflow
> check, it will mean we'll constantly accumulate wrong values under load
> since until the counter crosses zero, the double discount is not detected.
> I'd rather be sure to address the cause (why do we decrement it twice)
> than the consequence (value becomes negative).
>
> thanks!
> Willy
>


Re: Replicated stick tables have absurd values for conn_cur

2019-01-14 Thread Emerson Gomes
Hello Tim,

Sorry for the delayed answer.
The segfaults I had experienced apparently were related to something else -
Maybe some issue in my env.
At first I tried to apply the patch to 1.9.0, but after applying it to
1.8.7, I no longer had the segfaults.

So far I haven't experienced the underflow issue again.
I think it would be nice to merge this change to next releases - Not sure
how this is managed around here without the tracking tool :)

BR.,
Emerson









On Sat, Jan 12, 2019 at 12:34, Tim Düsterhus 
wrote:

> Emerson,
>
> On 07.01.19 at 13:40, Emerson Gomes wrote:
> > Just to update you, I have tried the patch, and while I didn't see any new
> > occurrences of the underflow, HAProxy started to crash constantly...
> >
> > Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
> > Current worker #1 (14366) exited with code 139 (Segmentation fault)
> > Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
> > exit-on-failure: killing every workers with SIGTERM
> > Jan 07 10:32:37 afrodite haproxy[14364]: [WARNING] 006/103237 (14364) :
> All
> > workers exited. Exiting... (139)
> >
> > I am not sure if the segfaults are related to the patch - Continuing
> > investigation...
> >
>
> I only checked whether my patch compiled successfully, but not whether
> it actually worked. I did not find the time to take a deeper look yet,
> I'm afraid.
>
> Did you find out, whether the segfaults are caused by the patch and
> where exactly it segfaults?
>
> Best regards
> Tim Düsterhus
>


Re: Replicated stick tables have absurd values for conn_cur

2019-01-07 Thread Emerson Gomes
Hello Tim,

Just to update you, I have tried the patch, and while I didn't see any new
occurrences of the underflow, HAProxy started to crash constantly...

Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
Current worker #1 (14366) exited with code 139 (Segmentation fault)
Jan 07 10:32:37 afrodite haproxy[14364]: [ALERT] 006/103237 (14364) :
exit-on-failure: killing every workers with SIGTERM
Jan 07 10:32:37 afrodite haproxy[14364]: [WARNING] 006/103237 (14364) : All
workers exited. Exiting... (139)

I am not sure if the segfaults are related to the patch - Continuing
investigation...

BR.,
Emerson


On Thu, Jan 3, 2019 at 21:48, Emerson Gomes 
wrote:

> Hello Tim,
>
> Thanks a lot for the patch. I will try it out and let you know the results.
>
> BR.,
> Emerson
>
> On Thu, Jan 3, 2019 at 21:18, Tim Düsterhus 
> wrote:
>
>> Emerson,
>>
>> On 03.01.19 at 21:58, Emerson Gomes wrote:
>> > However, the underflow scenario only seems to be possible if the peers
>> > are sending relative values, rather than absolute ones.
>>
>> I don't believe so. My hypothetical timeline was created with absolute
>> values in mind.
>>
>> > Apparently both cases (absolute and offset values) exist.
>> > I am looking at src/peers.c to understand how the peer protocol works
>> > and maybe create the patch you proposed (do not decrement the counter
>> > if already 0).
>>
>> I attached a patch which I believe fixes the issue (checking for 0 when
>> decrementing, not touching the peers).
>>
>> > However, it seems that a real fix would require some big changes to the
>> > protocol itself.
>>
>> Yes I agree.
>>
>> > One potential implementation I could imagine would be, rather than
>> > broadcasting absolute values or offsets, for each neighbor peer to report
>> > only the number of connections it handles locally, and it would be up to
>> > the local node to resolve the actual value by adding up the different
>> > values received from all neighbors.
>>
>> Yes, that probably would be the most reliable implementation. It takes
>> up more memory and processing power, though.
>>
>> > Not even sure if my understanding is correct, but it's a task currently
>> > out of my reach.
>> > Should I file a bug report somewhere? :)
>> >
>>
>> I suspect that the developers will notice this thread. A proper issue
>> tracker is a wish of mine as well
>> (https://www.mail-archive.com/haproxy@formilux.org/msg32239.html).
>>
>> Best regards
>> Tim Düsterhus
>>
>


Re: Replicated stick tables have absurd values for conn_cur

2019-01-03 Thread Emerson Gomes
Hello Tim,

Thanks a lot for the patch. I will try it out and let you know the results.

BR.,
Emerson

On Thu, Jan 3, 2019 at 21:18, Tim Düsterhus 
wrote:

> Emerson,
>
> On 03.01.19 at 21:58, Emerson Gomes wrote:
> > However, the underflow scenario only seems to be possible if the peers are
> > sending relative values, rather than absolute ones.
>
> I don't believe so. My hypothetical timeline was created with absolute
> values in mind.
>
> > Apparently both cases (absolute and offset values) exist.
> > I am looking at src/peers.c to understand how the peer protocol works and
> > maybe create the patch you proposed (do not decrement the counter if
> > already 0).
>
> I attached a patch which I believe fixes the issue (checking for 0 when
> decrementing, not touching the peers).
>
> > However, it seems that a real fix would require some big changes to the
> > protocol itself.
>
> Yes I agree.
>
> > One potential implementation I could imagine would be, rather than
> > broadcasting absolute values or offsets, for each neighbor peer to report
> > only the number of connections it handles locally, and it would be up to
> > the local node to resolve the actual value by adding up the different
> > values received from all neighbors.
>
> Yes, that probably would be the most reliable implementation. It takes
> up more memory and processing power, though.
>
> > Not even sure if my understanding is correct, but it's a task currently
> > out of my reach.
> > Should I file a bug report somewhere? :)
> >
>
> I suspect that the developers will notice this thread. A proper issue
> tracker is a wish of mine as well
> (https://www.mail-archive.com/haproxy@formilux.org/msg32239.html).
>
> Best regards
> Tim Düsterhus
>


Re: Replicated stick tables have absurd values for conn_cur

2019-01-03 Thread Emerson Gomes
Hello Tim,

Thanks for your answer. Indeed it's a very plausible explanation.
And in my case I do have some clients very frequently establishing/aborting
connections to all of the 5 nodes, which increases the odds of running into
the race condition and underflow issues.

However, the underflow scenario only seems to be possible if the peers are
sending relative values, rather than absolute ones.
Apparently both cases (absolute and offset values) exist.
I am looking at src/peers.c to understand how the peer protocol works and
maybe create the patch you proposed (do not decrement the counter if
already 0).

However, it seems that a real fix would require some big changes to the
protocol itself.

One potential implementation I could imagine would be, rather than
broadcasting absolute values or offsets, for each neighbor peer to report
only the number of connections it handles locally, and it would be up to the
local node to resolve the actual value by adding up the different values
received from all neighbors.
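A toy sketch of that idea (peer names are from this setup; the counts are made up): each node keeps one slot per neighbor and derives the total by summation, so a lost or duplicated update from one peer can only distort that peer's own slot:

```shell
# Hypothetical per-peer local connection counts, as each neighbor would
# report them ("peer count" per line); the node sums them locally instead
# of trusting a broadcast cluster-wide total.
reports="afrodite 3
artemis 1
atena 0
demeter 2
minerva 1"

total=0
while read -r peer count; do
    total=$(( total + count ))
done <<EOF
$reports
EOF
echo "total conn_cur: $total"   # prints "total conn_cur: 7"
```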

Not even sure if my understanding is correct, but it's a task currently out
of my reach.
Should I file a bug report somewhere? :)

BR.,
Emerson

On Thu, Jan 3, 2019 at 15:49, Tim Düsterhus 
wrote:

> Emerson,
>
> On 03.01.19 at 16:19, Emerson Gomes wrote:
> > This works fine most of the time, but every now and then, when I check
> > the stick table contents, one or more IPs show up with an absurd value
> > of conn_cur - Often around 4 billion - A number very close to
> > the 32-bit unsigned int data type limit.
> >
>
> That looks like an integer underflow and a limitation of the peer
> protocol. If I understand it correctly the peer protocol always sends an
> absolute value to it's peers, instead of a relative modification
> operation such as "value++". While I'm not able to cause a connection to
> be decremented twice I am able to cause some connections to never be
> decremented because of a race condition:
>
> - Connect to peer A  (A=1, B=0)
> - Peer A sends 1 to B(A=1, B=1)
> - Connect to peer B  (A=1, B=2)
> - Peer B sends 2 to A(A=2, B=2)
> - Kill both connections at the same time (A=1, B=1)
> - Peer A sends 1 to B(A=1, B=1)
> - Peer B sends 1 to A(A=1, B=1)
>
> There are no connections remaining, but both peers believe that there
> still is one connection.
>
> To cause the underflow you are seeing I imagine the following happens,
> but I don't manage to get the timing right.
>
> - Connect to peer A (A=1, B=0)
> - Peer A sends 1 to B   (A=1, B=1)
> - Kill connection to A  (A=0, B=1)
> - Connect to peer B (A=0, B=2)
> - Peer A sends 0 to B   (A=0, B=0)
> - Peer B sends 0/2 to A (A=?, B=0)
> - Kill connection to B  (A=?, B=-1)
> - Peer B sends -1 to A  (A=-1, B=-1)
>
> An easy fix would probably be skipping the decrement if the value is
> already 0. The counter will be off either way, though.
>
> Best regards
> Tim Düsterhus
>


Replicated stick tables have absurd values for conn_cur

2019-01-03 Thread Emerson Gomes
Hello,

I have a setup with 5 HAProxy v1.8.14-52e4d43 nodes, using one replicated
stick table. This is the relevant config:


peers cluster_hap
peer afrodite 10.0.0.2:7600
peer artemis 10.0.0.3:7600
peer atena    10.0.0.4:7600
peer demeter  10.0.0.5:7600
peer minerva  10.0.0.6:7600

frontend https
bind *:443 tfo ssl crt /etc/haproxy/certs/xxx.pem alpn h2,http/1.1

acl local_ips src 172.17.0.0/16

stick-table type ip size 1000 expire 10s store conn_cur peers
cluster_hap
tcp-request connection track-sc0 src
tcp-request connection accept if local_ips
tcp-request connection reject if { src_conn_cur gt 100 }
tcp-request connection accept
tcp-request inspect-delay 1s
tcp-request content accept if local_ips
tcp-request content accept if { src_conn_cur le 20 }
tcp-request content accept if WAIT_END


This works fine most of the time, but every now and then, when I check the
stick table contents, one or more IPs show up with an absurd value of
conn_cur - Often around 4 billion - A number very close to
the 32-bit unsigned int data type limit.
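For reference, this is how the per-entry values can be dumped at runtime through the stats socket (the socket path below is an assumption - use whatever your `stats socket` line defines):

```
$ echo "show table https" | socat stdio /run/haproxy-admin.sock
```

The conn_cur column of the affected entries is where the ~4 billion values show up.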


This feels like a bug, but I am not sure how to report it, or whether I am
doing something wrong in my setup. Can you please advise?


BR.,
Emerson