The culprit was actually the template file.  It appears to block while 
writing and it is really slow.  With that configuration option removed, a 
single process could keep up with what I could not accomplish using 80 
processes, each with its own template file.
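
For illustration only, the behaviour I see would be consistent with a pattern 
roughly like this, i.e. a blocking rewrite of the whole file in the receive 
path on every template refresh (a guess at the shape of it, not pmacct's 
actual code):

    #include <stddef.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* guessed pattern: rewrite the whole file, in the receive path,
       every time a template is added or refreshed */
    void write_templates_blocking(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
            return;
        write(fd, buf, len);   /* blocks the packet loop on disk latency */
        close(fd);
    }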

Any thoughts on a different implementation?

Writing the template file out on a configurable interval would be a simple 
improvement.
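
Something like the following is what I have in mind; a rough sketch only, 
where flush_templates() and TEMPLATE_FLUSH_INTERVAL are made-up names and the 
real template bookkeeping is omitted:

    #include <stdio.h>
    #include <time.h>

    #define TEMPLATE_FLUSH_INTERVAL 60   /* seconds; would be a config knob */

    static time_t last_flush;

    /* placeholder for serializing the in-memory template cache */
    static void flush_templates(const char *path)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return;
        fprintf(f, "...serialized templates...\n");
        fclose(f);
    }

    /* called from the receive loop instead of writing on every update */
    void maybe_flush_templates(const char *path)
    {
        time_t now = time(NULL);

        if (now - last_flush >= TEMPLATE_FLUSH_INTERVAL) {
            flush_templates(path);   /* one write per interval, not per template */
            last_flush = now;
        }
    }

That keeps disk I/O down to one write per interval instead of one per 
template update.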

When load-balancing, particularly with SO_REUSEPORT, it would be nice to let 
the collector processes share the template set with each other.  Perhaps 
another use for ZeroMQ?
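
Roughly what I am picturing, again only a sketch with invented function names 
and a made-up endpoint, and with error handling left out:

    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <zmq.h>

    /* each worker binds the same UDP port via SO_REUSEPORT so the kernel
       spreads exporters across them */
    int open_collector_socket(unsigned short port)
    {
        int one = 1;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in sa;

        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons(port);
        bind(fd, (struct sockaddr *) &sa, sizeof(sa));

        return fd;
    }

    /* each worker publishes the templates it learns so its siblings can
       merge them into their own caches */
    void *open_template_pub(void *zmq_ctx)
    {
        void *pub = zmq_socket(zmq_ctx, ZMQ_PUB);

        zmq_bind(pub, "ipc:///tmp/nfacctd-templates");   /* endpoint made up */
        return pub;
    }

    void publish_template(void *pub, const void *tmpl, size_t len)
    {
        zmq_send(pub, tmpl, len, 0);   /* siblings SUB and apply the update */
    }

That way a worker that never hears from a given exporter directly (SO_REUSEPORT 
hashes each exporter onto one socket) could still decode its data records.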

Brian



‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Sunday, February 24, 2019 5:02 PM, Paolo Lucente <pa...@pmacct.net> wrote:

>
>
> Hi Brian,
>
> You are most probably looking for this:
>
> https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
>
> Should that not work, i.e. too many input flows for the available
> resources, you have a couple of possible load-balancing strategies:
> one is to configure a replicator (tee plugin, see QUICKSTART).
>
> Paolo
>
> On Sun, Feb 24, 2019 at 05:31:55PM +0000, Brian Solar wrote:
>
> > Is there a way to adjust the UDP receive buffer size?
> > Are there any other indications of nfacctd not keeping up?
> > cat /proc/net/udp |egrep drops\|0835
> > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
> > 52366: 00000000:0835 00000000:0000 07 00000000:00034B80 00:00000000 00000000 0 0 20175528 2 ffff89993febd940 7495601
> > 7495601 drops w/ a buffer of 0x00034B80, i.e. 215936
> > sysctl -a |fgrep mem
> > net.core.optmem_max = 20480
> > net.core.rmem_default = 212992
> > net.core.rmem_max = 2147483647
> > net.core.wmem_default = 212992
> > net.core.wmem_max = 212992
> > net.ipv4.igmp_max_memberships = 20
> > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > net.ipv4.tcp_rmem = 4096 87380 6291456
> > net.ipv4.tcp_wmem = 4096 16384 4194304
> > net.ipv4.udp_mem = 9252429 12336573 18504858
> > net.ipv4.udp_rmem_min = 4096
> > net.ipv4.udp_wmem_min = 4096
> > vm.lowmem_reserve_ratio = 256 256 32
> > vm.memory_failure_early_kill = 0
> > vm.memory_failure_recovery = 1
> > vm.nr_hugepages_mempolicy = 0
> > vm.overcommit_memory = 0
>



_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
