Hi Matt,

The snaplen directive is meant for the pmacctd daemon, which is the
libpcap-based daemon of the set. You can compare pmacctd to tcpdump in
that they are both libpcap-based; they just do a slightly different job
and produce different output.
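
For reference, were it pmacctd you are running, this is roughly where a
snaplen would fit; a minimal sketch, reusing your interface name and
values purely as an illustration:

  ! pmacctd.conf sketch (libpcap-based, hence snaplen applies here)
  interface: mgmt0
  snaplen: 2000
  plugins: print
  aggregate: src_host, dst_host, src_port, dst_port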

nfacctd is a daemon that listens on a port, receiving and processing
NetFlow/IPFIX packets; by default it reads full packets, so no snaplen
applies. The issue, though, may be precisely what you say about buffers
and the kernel: there are too many packets for the available buffers
(and machine speed). You can tune all of this up; please read "XXII.
Miscellaneous notes and troubleshooting tips", section "e", in the FAQS
document. It explains how to verify whether there are packet drops
between kernel and application, which buffers to tune and how to do it,
i.e. nfacctd_pipe_size, plugin_buffer_size and plugin_pipe_size, and how
to work with the /proc filesystem, i.e. /proc/sys/net/core/[rw]mem_max.
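
In short, the drop check boils down to something like this (the exact
column layout of /proc/net/udp can vary across kernels):

  # "drops" column of the socket bound to your nfacctd port
  cat /proc/net/udp
  # or the global UDP counters ("receive buffer errors" and friends)
  netstat -s -u

And the tuning to something like the below; the numbers are only
illustrative starting points to be sized against your actual traffic,
not tested recommendations:

  # raise the kernel cap so a larger receive buffer can be granted
  sysctl -w net.core.rmem_max=134217728

  ! nfacctd.conf: kernel socket buffer plus core process/plugin pipe;
  ! keep plugin_buffer_size well below plugin_pipe_size
  nfacctd_pipe_size: 134217728
  plugin_buffer_size: 1310720
  plugin_pipe_size: 134217728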

Finally, should none of this help, please don't hesitate to get in touch
privately; I'd be happy to support you by having a look myself,
connecting remotely to your testbed.

Cheers,
Paolo

On Thu, Jul 07, 2016 at 02:50:12AM +0000, matt zulawski wrote:
> Evening,
> 
> Working with nfacctd today for the first time. Goal is to log semi-raw flow
> stats to a file in five minute chunks using the print plugin. Currently
> using this aggregation key:
> aggregate: src_host, dst_host, src_port, dst_port, export_proto_seqno
> 
> We want to use the seqnum to detect whether or not we have lost flows.
> 
> History:
> When I run this kind of tcpdump on the server:
> tcpdump -i mgmt0 port 9996 -s 2000
> 
> I get 100% packet delivery, 0 kernel loss. When I take the seqnums out of
> that file, there is a constant difference of 28 between consecutive
> seqnums, which is the number of flows exported per frame.
> 
> The snaplen of 2k is critical for this to work properly. Each packet is
> about 1.4 KB, so 2k is a safe snaplen value. If I leave the tcpdump
> default snaplen of 65k, I immediately start experiencing packet loss at
> the kernel level.
> 
> nfacctd problems:
> When I drum up a job in nfacctd to dump files every 5 minutes, I start
> seeing big gaps in sequence numbers, similar to when I left the default
> snaplen at 65k in tcpdump. But when I check the official nfacctd docs, the
> snaplen parameter does not gel with nfacctd. :(
> 
> This is the config I'm using:
> =====
> nfacctd_ip: xxxxxxxx
> nfacctd_port: xxxx
> plugin_buffer_size: 1310720
> plugin_pipe_size: 134217728
> nfacctd_disable_checks: false
> !
> plugins: print
> !
> aggregate: src_host, dst_host, src_port, dst_port, export_proto_seqno
> !
> print_refresh_time: 60
> print_history: 1m
> print_output: csv
> print_output_file: /home/mzadmin/flows-%Y%m%d-%H%M.txt
> print_output_file_append: true
> print_history_roundoff: m
> !
> =====
> 
> The server is receiving approximately 500 UDP packets per second, which
> at 28 flows per packet I _think_ translates to approximately 14,000
> flows per second.
> 
> I think the key is adding some tuning parameters to my nfacctd.conf. I've
> just splashed some numbers into the buffer and pipe settings, but I don't
> know if those are ideal or if there are any other settings I should be
> tweaking.
> 
> Anyone have some advice? I swear I've RTFM'd, I just need an extra push!
> 
> 
> Thanks for your attention, and kind regards,
> -Matt Zulawski

> _______________________________________________
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
