Hi Anthony,

I map the word 'message' to 'flow' and not to NetFlow packet, please
correct me if this assumption is wrong. 55M flows/min makes it roughly
1M flows/sec. I would not recommend stretching a single nfacctd daemon
beyond 200K flows/sec, and the beauty of NetFlow, being UDP, is that
it can easily be scaled horizontally. Details and complexity vary from
use case to use case, but as a starting point I would recommend looking
in the following direction: point all NetFlow to a single IP/port where
an nfacctd in replicator mode is listening. You should test that it is
able to absorb the full feed on your CPU resources. Then you replicate
parts of the full feed to downstream nfacctd collectors, i.e. with some
headroom you could instantiate around 6-8 nfacctd collectors. You can
balance the incoming NetFlow packets using round-robin, by assigning
flow exporters to flow collectors, or with some hashing. Here is how
to get started:

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L1384-L1445

Of course you can do the same with your load-balancer of preference.
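
To make this concrete, here is a minimal sketch of what the replicator
and one downstream collector could look like. All addresses, ports,
file paths, the Kafka topic and the aggregation keys below are
placeholders for illustration; please check the QUICKSTART section
linked above and CONFIG-KEYS for the exact knobs and defaults in your
version.

Replicator (the single IP/port all exporters send to), nfacctd config:

  ! the tee plugin replicates received NetFlow to the receivers listed below
  nfacctd_port: 2100
  plugins: tee[rep]
  tee_receivers[rep]: /etc/pmacct/tee_receivers.lst
  ! optionally spoof the original exporter source address (needs root)
  tee_transparent: true

/etc/pmacct/tee_receivers.lst, balancing the feed round-robin across a
pool of downstream collectors (see the QUICKSTART for the balancing
algorithms available in your version):

  id=1  ip=127.0.0.1:2101,127.0.0.1:2102,127.0.0.1:2103  balance-alg=rr

Each downstream collector then listens on its own port and writes to
Kafka, something along these lines:

  nfacctd_port: 2101
  plugins: kafka[flows]
  kafka_broker_host[flows]: localhost
  kafka_topic[flows]: pmacct.flows
  kafka_refresh_time[flows]: 300
  kafka_history[flows]: 5m
  kafka_history_roundoff[flows]: m
  aggregate[flows]: src_host, dst_host, src_port, dst_port, proto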

Paolo

On Thu, Nov 16, 2017 at 01:16:48PM -0500, Anthony Caiafa wrote:
> Hi! So my use case may be slightly larger than most. I am processing 1:1
> NetFlow data for a larger infrastructure. We are receiving about 55 million
> messages a minute, which isn’t much, but pmacct does not seem to like it
> very much. I have pmacct scheduled with Nomad running across a few
> machines, with 2 designated ports accepting the flow traffic and outputting
> it to Kafka.
> 
> About every 5m or so pmacct dies and restarts, basically dropping all
> traffic for a short period of time. The two configuration directives I
> have that do anything every 5 minutes are:
> 
> kafka_refresh_time[name]: 300
> kafka_history[name]: 5m
> 
> 
> So I am not sure whether it is one of these or not, since the logs only
> indicate that it lost a connection to Kafka and that is about it.
