Hi Mikhail,

For the export (pmacctd) part let me point you to Q7 of the FAQS doc:


Specifically, PF_RING and ZeroMQ-based internal buffering (for the
latter, grep 'ZeroMQ' in the QUICKSTART document).
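As a sketch, assuming a pmacct build configured with ZeroMQ support
(--enable-zmq), enabling ZeroMQ-based queueing between the core process
and a plugin could look like the fragment below; interface, receiver
address and profile value are illustrative, see CONFIG-KEYS for the
plugin_pipe_zmq* semantics:

```
! pmacctd.conf fragment (illustrative values):
pcap_interface: eth0
plugins: nfprobe[exp]
nfprobe_receiver[exp]: 192.0.2.10:2100
! ZeroMQ-based queueing between the core process and the plugin,
! instead of the default home-grown circular buffer:
plugin_pipe_zmq[exp]: true
plugin_pipe_zmq_profile[exp]: large
```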

For the collection (nfacctd) part: if, for your project, you can use
NetFlow v5 or sFlow exports (neither of which is template-based), then
you could rely on SO_REUSEPORT. Although described in the context of BGP
collection, you can re-use the idea for flow collection:


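To make the SO_REUSEPORT idea concrete, here is a minimal sketch (plain
Python, not pmacct code) of the kernel mechanism that multiple nfacctd
instances would rely on: several processes bind the very same UDP port
and the kernel (Linux >= 3.9) spreads incoming datagrams across them by
flow hash. Since NetFlow v5 / sFlow packets are self-contained, whichever
instance receives a packet can decode it on its own.

```python
import select
import socket

def make_collector(port=0):
    # Each nfacctd-like worker would open its own socket like this:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

c1 = make_collector()              # kernel picks a free port
port = c1.getsockname()[1]
c2 = make_collector(port)          # second worker shares the same port

# An "exporter" sends one datagram; the kernel hashes the flow tuple
# and delivers it to exactly one of the two sharing sockets.
exporter = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
exporter.sendto(b"fake-netflow-v5-record", ("127.0.0.1", port))

ready, _, _ = select.select([c1, c2], [], [], 1.0)
print("delivered to %d of 2 sharing sockets" % len(ready))
```

Because the kernel keeps a given exporter's packets on the same socket,
each collector instance sees a consistent stream.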
If, instead, you are using NetFlow v9/IPFIX (which are both
template-based), then we may want to resort to a finer approach: a
replicator (nfacctd with the 'tee' plugin) set up as a balancer in front
of the actual nfacctd collectors. We can follow up if this is the case;
I'd also like to better understand the export part, i.e. sharing your
config would help.
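The replicator idea could be sketched as below; addresses, ports and
file paths are made up, and the receivers file syntax follows pmacct's
examples/tee_receivers.lst.example:

```
! nfacctd replicator instance, receiving from the exporters:
nfacctd_port: 2100
plugins: tee[balancer]
tee_receivers[balancer]: /etc/pmacct/tee_receivers.lst
tee_transparent: true

! /etc/pmacct/tee_receivers.lst -- spread exporters across three
! collector instances; hash-agent keeps each exporter (and hence its
! templates plus the data records depending on them) on one collector:
id=1 ip=127.0.0.1:2101,127.0.0.1:2102,127.0.0.1:2103 balance-alg=hash-agent
```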


On Thu, May 16, 2019 at 11:54:17AM +0200, Mikhail Sennikovsky wrote:
> Hi all,
> We were experimenting with pmacctd/nfacctd-based IP traffic
> accounting recently, and have faced some issues with handling small
> packet floods by pmacctd/nfacctd in our setup.
> Would be great if someone here could suggest how we could overcome them.
> Our goal was actually to precisely account the amount of traffic being
> sent to and from each IP used by a set of "client" hosts sitting
> behind the "router" host, which routes traffic to/from them.
> In our test setup the pmacctd was running on that "router" host,
> sniffing on its outbound interface, and then sending the netflow data
> to the nfacctd running on a "collector" host.
> So we've experienced two main problems when some "client" host started
> to flood some small, e.g. tcp syn flood (this does not have to be
> exactly tcp syn flood however, e.g. flooding small udp packets each
> using different source port would work as well):
> 1. top reported ~50% cpu utilization of pmacctd processes, and started
> reporting packet drops (dropped_packets value reported by SIGUSR1
> handler)
> 2. pmacctd started producing significant amount of netflow traffic,
> which was eventually dropped by the nfacctd on the "collector" host
> (netstat -su reporting the increasing number of udp receive buffer
> errors, while increasing the nfacctd_pipe_size to 2097152 made the
> situation better, but still did not make the drops go away
> completely).
> Both of the above (apparently) resulted in a decrease in the precision
> of our traffic measurements.
> Had someone else here experienced similar issues, and/or could perhaps
> suggest some ways of overcoming them?
> Perhaps given that we do not need the information on each and every
> "flow", but rather just the precise info on overall packets/bytes
> being sent to/from a specific IP, it might be possible to adjust our
> setup to tolerate such flooding?
> Thanks,
> Mikhail
> _______________________________________________
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
