Is there a way to send the device's decoded system time and uptime to Kafka?

Are there protections against flows whose reported end time precedes their
start time, or against other time-format errors?

I have yet to track down the actual packets, but I've seen both normal
timestamps and very old timestamps in Kafka, and at times this causes
nfacctd to generate millions of entries.

Fortigate devices seem to report this way at times; I haven't noticed it on
other devices yet.
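
For reference, the direction I am experimenting in looks roughly like the
sketch below; the broker host and topic are placeholders, and I am assuming
the timestamp_start / timestamp_end / timestamp_arrival / timestamp_export
primitives are spelled this way in the current release:

    ! sketch: expose exporter-reported times next to collector arrival time
    ! (kafka.example.com and pmacct.flows are placeholders)
    plugins: kafka[kafka2]
    aggregate[kafka2]: peer_src_ip, src_host, dst_host, timestamp_start, timestamp_end, timestamp_arrival, timestamp_export
    kafka_broker_host[kafka2]: kafka.example.com
    kafka_topic[kafka2]: pmacct.flows

If timestamp_export carries the export packet header time, that would at
least put the device clock next to the flow start/end and arrival times; I
do not see an obvious primitive for sysUpTime itself.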

-------- Original Message --------
On Feb 25, 2019, 9:28 AM, Paolo Lucente wrote:

Hi Brian,

Thanks very much for the nginx config, definitely something to add to the
docs as a possible option. QN reads 'Queries Number' (inherited from the
SQL plugins, hence the queries wording); the first number is how many
are sent to the backend, the second is how many should be sent as part
of the purge event.

They should normally be aligned. In the case of NetFlow/IPFIX, among the
different possibilities, a mismatch may reveal time sync issues between the
exporters and the collector; the easiest way to resolve it (or experiment)
is to use the arrival time at the collector as the timestamp in pmacct
(versus the start time of flows) by setting nfacctd_time_new to true.
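
For example, a minimal collector-side fragment (the plugin name below just
mirrors the one in your logs; everything else stays as in your existing
config):

    ! stamp aggregates with the arrival time at the collector
    ! instead of the flow start time reported by the exporter
    nfacctd_time_new: true
    plugins: kafka[kafka2]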

Paolo

On Mon, Feb 25, 2019 at 03:23:42AM +0000, Brian Solar wrote:
>
> Thanks for the response, Paolo. I am using nginx as a stream (UDP) load
> balancer (see config below).
>
> Another quick question on the Kafka plugin: what does the QN portion of the
> 'Purging cache - END' log line indicate?
>
>
> 2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 387033, QN: 12786/13291, ET: 1) ***
>
> 2019-02-25T03:16:22Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 150221, QN: 426663/426663, ET: 19) ***
>
> # Load balance UDP-based FLOW traffic across two servers
> stream {
>
>     log_format combined '$remote_addr - - [$time_local] $protocol $status '
>                         '$bytes_sent $bytes_received $session_time "$upstream_addr"';
>
>     access_log /var/log/nginx/stream-access.log combined;
>
>     upstream flow_upstreams {
>         #hash $remote_addr consistent;
>         server 10.20.25.11:2100;
>         server 10.20.25.12:2100;
>     }
>
>     server {
>         listen 2201 udp;
>         proxy_pass flow_upstreams;
>         #proxy_timeout 1s;
>         proxy_responses 0;
>         # must have user: root in main config
>         proxy_bind $remote_addr transparent;
>         error_log /var/log/nginx/stream-flow-err.log;
>     }
> }
>
>
>
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:
>
> >
> >
> > Hi Brian,
> >
> > You are most probably looking for this:
> >
> > https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-L2659
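> >
> > Roughly, the knobs involved look like this (a sketch only; the values are
> > examples and need tuning, and net.core.rmem_max must be large enough to
> > honour the requested size):
> >
> > # on the collector host, allow a larger socket receive buffer
> > sysctl -w net.core.rmem_max=67108864
> >
> > ! nfacctd.conf: request a bigger kernel receive buffer on the listening socket
> > nfacctd_pipe_size: 33554432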
> >
> > Should that not work, i.e. too many input flows for the available
> > resources, you have a couple of possible load-balancing strategies:
> > one is to configure a replicator (tee plugin, see QUICKSTART).
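> >
> > As a very rough sketch of the replicator idea (the path, file layout and
> > receiver addresses are placeholders to double-check against QUICKSTART):
> >
> > ! nfacctd.conf on the replicator
> > plugins: tee[rep]
> > tee_receivers[rep]: /etc/pmacct/tee_receivers.lst
> >
> > ! /etc/pmacct/tee_receivers.lst
> > id=1 ip=10.20.25.11:2100
> > id=2 ip=10.20.25.12:2100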
> >
> > Paolo
> >
> > On Sun, Feb 24, 2019 at 05:31:55PM +0000, Brian Solar wrote:
> >
> > > Is there a way to adjust the UDP receive buffer size?
> > > Are there any other indications of nfacctd not keeping up?
> > > cat /proc/net/udp | egrep 'drops|0835'
> > > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
> > > 52366: 00000000:0835 00000000:0000 07 00000000:00034B80 00:00000000 00000000 0 0 20175528 2 ffff89993febd940 7495601
> > > That is 7495601 drops, with rx_queue at 0x34B80 (215936 bytes) against an rmem_default of 212992.
> > > sysctl -a |fgrep mem
> > > net.core.optmem_max = 20480
> > > net.core.rmem_default = 212992
> > > net.core.rmem_max = 2147483647
> > > net.core.wmem_default = 212992
> > > net.core.wmem_max = 212992
> > > net.ipv4.igmp_max_memberships = 20
> > > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > > net.ipv4.tcp_rmem = 4096 87380 6291456
> > > net.ipv4.tcp_wmem = 4096 16384 4194304
> > > net.ipv4.udp_mem = 9252429 12336573 18504858
> > > net.ipv4.udp_rmem_min = 4096
> > > net.ipv4.udp_wmem_min = 4096
> > > vm.lowmem_reserve_ratio = 256 256 32
> > > vm.memory_failure_early_kill = 0
> > > vm.memory_failure_recovery = 1
> > > vm.nr_hugepages_mempolicy = 0
> > > vm.overcommit_memory = 0
> >
_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
